Looking for VPS owners suffering with high memory usage and disk swapping.

sudo Member
edited July 2017 in General

If you are hitting memory limits on your VPS and/or experience heavy swapping, please respond below.

What have you done so far to address this? Obviously we can just add more RAM but that's no fun!

Comments

  • akhfa Member

    Reboot

    Thanked by sudo
  • scaveney Member

    Is this just a general question or specific to some host or use case?

  • sudo Member
    edited July 2017

    @akhfa said:
    Reboot

    Not running websites?

    Instead of reboot you can do the same with a cron of:

    sync; echo 3 > /proc/sys/vm/drop_caches

    But both reboot and drop_caches are very extreme. ha
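
    For reference, a root crontab line for that would look roughly like this - purely illustrative, and (as the replies below make clear) not something I'd actually schedule:

    # run hourly as root; cron hands the line to /bin/sh, so the redirect works as written
    0 * * * * sync; echo 3 > /proc/sys/vm/drop_caches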

  • ewrek Member

    Just means that the software that used it all is shit.

    Thanked by scaveney
  • sudo Member

    @scaveney said:
    Is this just a general question or specific to some host or use case?

    Sorry, I could have been a bit clearer.

    ...Just in general. I've seen a lot of users upgrade RAM right away. I'm curious what users who are more focused on lowend specs would do first in the same situation: a webserver running out of usable memory.

  • vimalware Member

    trim the fat. stop trying to run stuff on 128mb KVMs.

    swap out apache prefork for anything else.

    or get an oversold-type OVZ instead and consolidate.
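
    For the prefork swap, a rough Debian/Ubuntu sketch (module names depend on your PHP setup, and mod_php has to move to php-fpm first, since it needs prefork):

    a2query -M                          # show which MPM is currently enabled
    a2dismod php7.0 mpm_prefork         # adjust the php module name to whatever you actually run
    a2enmod mpm_event proxy_fcgi setenvif
    systemctl restart apache2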

    Thanked by scaveney
  • scaveney Member
    edited July 2017

    @akhfa said:
    Reboot

    Sorry, this isn't Windows 98. You don't have memory leaks in your kernel. If your applications are leaking memory, you can restart the application (even though you should fix it). Reboot is never an option.

    @sudo said:
    Instead of reboot you can do the same with a cron of:

    sync; echo 3 > /proc/sys/vm/drop_caches

    But both reboot and drop_caches are very extreme. ha

    Dropping caches is not only useless but actually harmful to performance. The kernel has three main caches affected by this knob:

    • pagecache - keeps pages that are unused but already mapped; when a mapping request is made, it can be filled from the pagecache, which reduces allocation overhead. Not having to update page tables means less context switching and less delay on allocation.

    • dentry - a dentry is the kernel structure representing an actual file/folder name on disk. It consists of the string holding the name and a pointer to an inode representing the actual data in the file / the metadata of a folder.

    • inode - an inode is the reference to the actual data on disk, for both files and directories. inodes are unique and can have multiple dentries referencing them - this is how hard links are implemented (two or more file names referencing the same blocks on disk).

    Since these are all very commonly used kernel structures, caching them is a major source of performance in both the mm (memory manager) and vfs (virtual filesystem) subsystems. Dropping these caches means that memory allocations and access to commonly used files and folders have to fall back to slow paths. Not only will those syscalls be significantly slower, but the kernel will immediately re-cache whatever gets accessed, since the caches are now empty. So by dropping caches you have effectively hindered performance and "freed" a chunk of memory that was already technically free. When an application needs memory and there is not enough "free" memory available to fulfill the request, the allocator frees cached items until it can satisfy it. Since that memory is already mapped, you skip the allocator overhead, which is a performance win.

    So really, if you only count the "free memory" figure on its own, you aren't doing it properly. It should be treated as "free memory + cached memory" since all that space is available for use at any time, immediately.
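
    If you want to see that in practice, a quick sketch (MemAvailable needs a reasonably recent kernel, and the "available" column a reasonably recent procps):

    free -m                                          # "available" column = free + reclaimable cache
    grep -E 'MemFree|MemAvailable|^Cached' /proc/meminfo
    # MemAvailable is the kernel's own estimate of memory usable without swapping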

    @ewrek said:
    Just means that the software that used it all is shit.

    Most of the time yes, either shit or misconfigured. Most workloads require very little memory... if efficient programming were still practiced and we didn't have a culture of building software using layers upon layers of garbage abstraction, servers with 16 GB would still be considered massive.

    @vimalware said:
    trim the fat. stop trying to run stuff on 128mb KVMs.

    swap out apache prefork for anything else.

    or get an oversold-type OVZ instead and consolidate.

    A 128 MB KVM is fine for a lot of distros, and even fine for CentOS or Debian with a highly optimized system (unneeded packages removed, the full MTA replaced with ssmtp, sshd replaced with dropbear, etc.)
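
    A rough sketch of that kind of trimming on Debian - exact package names vary by release, so treat it as an outline rather than a recipe:

    apt-get install ssmtp dropbear                  # install the tiny replacements first so nothing critical disappears
    apt-get purge --auto-remove exim4-base exim4-config openssh-server
    apt-get autoremove --purge                      # sweep up anything else left dangling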

    (Edited to fix formatting)

    Thanked by vimalware
  • sudo Member
    edited July 2017

    @scaveney

    "...but actually harmful to performance". thats true! Agreed. For sure, I would NOT reboot or drop caches either. My point in initial reply to the "reboot" suggestion was just in showing how extreme rebooting was. But yes "echo 3" is not only harmful for performance (as rebooting would be) but also comes with some risks.

    For a web server this would be perfectly safe:
    sync; echo 1 > /proc/sys/vm/drop_caches
    ...but again not recommended, as it's only treating the symptom, as it were, and does not address my OP.

    For the last line: "It should be treated as "free memory + cached memory" since all that space is available for use at any time, immediately." - Also true, but at the price of performance. Also, as per the OP, what about when a server does "experience heavy swapping"? If the server has enough memory to use at any time, then swapping would not be an issue.
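
    If anyone wants to check whether a box is actually swapping hard (rather than just having pages parked in swap), a quick sketch - the swappiness value is only an example:

    vmstat 5                      # watch the si/so columns; sustained non-zero values mean active swapping
    sysctl vm.swappiness          # default is usually 60
    sysctl -w vm.swappiness=10    # lower = prefer reclaiming cache over swapping out application pages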

    Good discussion.

  • WSS Member

    @ewrek said:
    Just means that the software that used it all is shit.

    I've discovered that this is generally the case.

    Thanked by ewrek, scaveney
  • @sudo said:

    What have you done so far to address this? Obviously we can just add more RAM but that's no fun!

    Block all traffic.

    Thanked by scaveney
  • @sudo said:
    @scaveney

    "...but actually harmful to performance". thats true! Agreed. For sure, I would NOT reboot or drop caches either. My point in initial reply to the "reboot" suggestion was just in showing how extreme rebooting was. But yes "echo 3" is not only harmful for performance (as rebooting would be) but also comes with some risks.

    Good to know, it wasn't clear from the context. Many people do legitimately suggest doing such things to "optimize the machine" - I have even seen people suggest putting it in a once-per-minute cronjob. Horrible.

    For a web server this would be perfectly safe:
    sync; echo 1 > /proc/sys/vm/drop_caches
    ...but again not recommended, as it's only treating the symptom, as it were, and does not address my OP.

    The page cache is only a tiny portion of the cache usage, so it wouldn't do much for available resources. Unless the httpd is preallocating all memory and doing memory management on its own (read: a bad idea), it will likely need to allocate some memory per request. Keeping the page cache intact means those frequent malloc/free combos have little impact beyond a handful of additional context switches on each HTTP request.

    For the last line: "It should be treated as "free memory + cached memory" since all that space is available for use at any time, immediately." - Also true, but at the price of performance. Also, as per the OP, what about when a server does "experience heavy swapping"? If the server has enough memory to use at any time, then swapping would not be an issue.

    I firmly believe modern server deployments should NOT have swap. I do not add swap to any of my equipment, because if the machine is reaching that point -

    1) I don't overcommit memory on my nodes. If available memory (including cache) becomes exhausted, something is out of control. Whatever process is broken and uncontrollably consuming memory is unlikely to just stop at the physical memory limit and would begin to hit swap. Hitting swap leads to my second point -

    2) I want the OOM killer to run. Critical/sensitive services should set their OOM killer priority as necessary (as sshd does, for example - there's a quick sketch of this further down). I would prefer my monitoring to alert me to a service/process failure rather than tell me that a host has gone unresponsive altogether. The latter ties into the third point -

    3) On spindle storage at least, having active pages in swap is essentially a death sentence. You may be able to recover the machine, but odds are slim. Solid state storage improves this situation somewhat, but DDR3 on a Sandy Bridge era CPU/memory controller (using it as a baseline since I deploy a lot of Sandy Bridge) is still an order of magnitude faster than even the market's fastest NVMe drive. A Samsung 960 as swap is comparable to a quad-pumped Pentium 4 - does that sound like good performance to you? ;)

    I also fear for the quality of a host using NVMe for swap... ("hey look guys, our new KVM nodes have 512 GB RAM!" )
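
    As for point 2, a minimal sketch of setting that priority by hand - sshd is just the example from above, and systemd units can do the same persistently with OOMScoreAdjust=:

    for pid in $(pidof sshd); do
        # -1000 = never OOM-killed, 0 = default, +1000 = first candidate to be killed
        echo -1000 > /proc/$pid/oom_score_adj
    done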

    Good discussion.

    To you as well - these are the sort of discussions that we need more of here, especially with ***board being dead.

  • akhfa Member
    edited July 2017

    @scaveney said:

    @akhfa said:
    Reboot

    Sorry, this isn't Windows 98. You don't have memory leaks in your kernel. If your applications are leaking memory, you can restart the application (even though you should fix it). Reboot is never an option.

    C'mon guys, I was joking :D (at least when I wrote my answer)

    Actually, if you just have one server as a webserver, then yes, a reboot is never an option (or at best a last option, if your server hangs).

    But when you have two servers serving the same website in an HA design, rebooting each of them sequentially (keeping one server up while the other reboots) can be a quick solution before you can add a third server to the cluster.

    Of course, if you just have one server, the safe practice is to monitor your RAM usage and add additional RAM before it starts using swap, because swap will decrease server performance a lot.

    If you decide to use swap to keep the budget down, prepare for degraded server performance. For me, swap is just a "memory spare" until I can increase my RAM size.
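
    For the monitoring part, a throwaway sketch that could sit in cron - the 10% threshold and the mail command are just assumptions about your setup:

    avail=$(awk '/^MemAvailable/ {print $2}' /proc/meminfo)
    total=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)
    # warn when available memory (free + reclaimable cache) drops below 10% of total
    [ $((avail * 100 / total)) -lt 10 ] && echo "low memory on $(hostname)" | mail -s "RAM warning" you@example.com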

  • edgecase Member
    edited July 2017
    1. Run nginx in front of apache.
    2. Block the crawlers from useless sites that just resell the data with your nginx config: https://hastebin.com/ecokozitus.tex
    3. Limit the remaining search engines to only things you really want crawled with robots.txt (rough sketch at the end of this post)
    4. Eliminate unwanted hotlinking with some .htaccess rules

    Cut my RAM usage in half.
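
    For point 3, a rough sketch of a restrictive robots.txt - the docroot path and the allowed bot are just examples, tune to taste:

    # allow Googlebot everywhere, tell every other crawler to stay out
    # (robots.txt is only advisory - rude bots ignore it, hence the nginx rules in point 2)
    printf 'User-agent: Googlebot\nDisallow:\n\nUser-agent: *\nDisallow: /\n' > /var/www/html/robots.txt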

  • WSS Member

    Well, one might just say that example is an... edge case.
