New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Comments
Reboot
Is this just a general question or specific to some host or use case?
Not running websites?
Instead of reboot you can do the same with a cron of:
sync; echo 3 > /proc/sys/vm/drop_caches
But both reboot and drop_caches are very extreme. ha
Just means that the software that used it all is shit.
Sorry I could have been a bit more clear.
...Just in general. I've seen a lot of users just upgrade RAM right away. I'm curious what users who are more focused on low-end specs would do first in the same situation of a webserver running out of usable memory.
trim the fat. stop trying to run stuff on 128mb KVMs.
swap out apache prefork for anything else.
or get an overzold-type OVZ instead and consolidate.
Sorry, this isn't Windows 98. You don't have memory leaks in your kernel. If your applications are leaking memory, you can restart the application (even though you should fix it). Reboot is never an option.
Dropping caches is not only useless but actually harmful to performance. The kernel has three main caches affected by this knob:

- pagecache - keeps pages that are unused but already mapped. When a mapping request is made, it can be filled from the pagecache, which reduces allocation overhead. No need to update page tables means less context switching and delay on allocation.
- dentry - a dentry is the kernel structure representing an actual file/folder name on disk. It consists of the string holding the name and a pointer to an inode representing the actual data in the file / metadata of a folder.
- inode - an inode is the reference to actual data on the disk, both files and directories. inodes are unique and can have multiple dentries referencing them - this is how hard links are implemented (two or more files referencing the same blocks on disk).

Since these are all very commonly used kernel structures, caching them is a major source of performance in both the mm (memory manager) and vfs (virtual filesystem) subsystems. Dropping these caches means that doing memory allocations or accessing commonly used files & folders will have to drop back to slow paths. Not only will these syscalls be significantly slower, but they will immediately re-cache data that's accessed, since the caches are now empty. So, by dropping caches you have effectively hindered performance and freed a chunk of memory that was already technically free.

When an application needs memory and there is an insufficient amount of "free" memory available to fulfill that request, the allocator begins to free cached items until it can satisfy the request. Since this memory is already mapped, you skip the allocator overhead, which is a performance increase.

So really, if you count "free memory" by the "free memory" parameter alone, you aren't doing it properly. It should be treated as "free memory + cached memory", since all that space is available for use at any time, immediately.
Most of the time yes, either shit or misconfigured. Most workloads require very little memory... if efficient programming was still practiced and we didn't have a culture of building software using layers upon layers of garbage abstraction, servers with 16GB would still be considered massive.
A 128 MB KVM is fine for a lot of distros, and even fine for CentOS or Debian with a highly optimized system (unneeded packages removed, mail spool replaced with ssmtp, sshd replaced with dropbear, etc.)
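Finding the fat to trim is mostly a matter of looking. A hypothetical sketch (Linux only) that walks /proc and lists resident memory per process, biggest first:

```shell
#!/bin/sh
# Print VmRSS (resident memory, kB) and process name, largest first,
# so you can see which daemons are worth removing or replacing.
for p in /proc/[0-9]*; do
    awk '/^Name:/  {name=$2}
         /^VmRSS:/ {rss=$2}
         END {if (rss) print rss, name}' "$p/status" 2>/dev/null
done | sort -rn | head
```

Anything sitting near the top that you don't recognize is a candidate for purging or swapping out for a lighter alternative.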
(Edited to fix formatting)
@scaveney
"...but actually harmful to performance" - that's true, agreed. For sure, I would NOT reboot or drop caches either. My point in my initial reply to the "reboot" suggestion was just to show how extreme rebooting is. But yes, "echo 3" is not only harmful for performance (as rebooting would be) but also comes with some risks.
For a web server this would be perfectly safe:
sync; echo 1 > /proc/sys/vm/drop_caches
...but again not recommended, as it's only treating the symptom, as it were, and does not address my OP.
For the last line: "It should be treated as 'free memory + cached memory' since all that space is available for use at any time, immediately." - also true, but at the price of performance. Also, per the OP, what about when the server "experiences heavy swapping"? If the server had enough memory to use at any time, then swapping would not be an issue.
Good discussion.
I've discovered that this is generally the case.
Block all traffic.
Good to know, it wasn't clear from the context. Many people do legitimately suggest doing such things to "optimize the machine" - I have even seen people suggest putting it in a once-per-minute cronjob. Horrible.
The page cache is only a tiny portion of the cache usage, so it wouldn't do much for available resources. Unless the httpd is preallocating all memory and doing memory management on its own (read: a bad idea) it will likely need to allocate some memory on a request. Keeping the page cache intact means that these frequent malloc/free combos (per request) have little impact compared to a handful of additional context switches on each HTTP request.
I firmly believe modern server deployments should NOT have swap. I do not add swap to any of my equipment, because if the machine is reaching that point -
1) I don't overcommit memory on my nodes. If available memory (including cache) becomes exhausted, something is out of control. Whatever process is broken and uncontrollably consuming memory is unlikely to just stop at the physical memory limit and would begin to hit swap. Hitting swap leads to my second point -
2) I want the OOM killer to run. Critical/sensitive services should set their OOM killer priority as necessary (as sshd does, for example). I would prefer my monitoring to alert me of service/process failure rather than alert me that a host has gone unresponsive altogether. The latter applies to the third point -
3) On spindle storage at least, having active pages in swap is essentially a death sentence. You may be able to recover the machine, but odds are slim. Solid state storage improves this situation somewhat, but DDR3 on a Sandy Bridge era CPU/memory controller (using as a baseline as I deploy a lot of Sandy Bridge) is still an order of magnitude faster than even the market's fastest NVMe drive. A Samsung 960 as swap is comparable to a quad-pumped Pentium 4 - does that sound like good performance to you?
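The per-process OOM priority mentioned in point 2 lives in /proc/&lt;pid&gt;/oom_score_adj. A minimal sketch - the value 500 is just an example (-1000 means never kill, 1000 means kill first), and note that lowering a score below its current value needs CAP_SYS_RESOURCE:

```shell
#!/bin/sh
# Make the current shell a preferred OOM victim. sshd does the opposite,
# setting itself to -1000 so it survives memory pressure.
echo 500 > /proc/$$/oom_score_adj
cat /proc/$$/oom_score_adj
```

Under systemd, the same knob is exposed per-service as OOMScoreAdjust= in the unit file, which is the cleaner place to set it for daemons.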
I also fear for the quality of a host using NVMe for swap... ("hey look guys, our new KVM nodes have 512 GB RAM!" )
To you as well - these are the sort of discussions that we need more of here, especially with ***board being dead.
C'mon guys, I was joking (at least when I wrote my answer)
Actually, if you just have one server as a webserver, then yes, reboot is never an option (a last option, to be honest, if your server hangs).
But when you have two servers serving the same website with an HA design, rebooting each of them sequentially (keeping one server up while the other reboots) can be a quick solution before you can add a third server to the cluster.
Of course, if you just have one server, the safe practice is to monitor your RAM usage and add additional RAM before it uses swap, because swap will decrease server performance a lot.
If you decide to use swap to minimize budget, prepare for degraded server performance. For me, swap is just a "memory spare" until I can increase my RAM size.
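If you do go the "memory spare" route, a hypothetical stopgap sketch (needs root; the path, size, and swappiness value are placeholders to adjust):

```shell
#!/bin/sh
# Create a 1 GiB swap file as a temporary cushion until RAM is upgraded.
fallocate -l 1G /swapfile    # or: dd if=/dev/zero of=/swapfile bs=1M count=1024
chmod 600 /swapfile          # swap files must not be world-readable
mkswap /swapfile
swapon /swapfile

# Only spill to swap under real memory pressure, not eagerly:
sysctl vm.swappiness=10
```

Add the file to /etc/fstab if it should survive a reboot, and remember to swapoff and delete it once the RAM upgrade lands.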
Cut my RAM usage in half.
Well, one might just say that example is an... edge case.