New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Comments
Got alerts of our Dallas2 hitting load of 28 (2x E5-2620) and it brushed it off like it was nothing. Never sustained anything above 4 though.
While not a VPS, we used to routinely see loads of 800 during peak business hours due to mismanaged cPanel, Apache, and MySQL instances. Getting rid of cPanel, moving to an Nginx reverse proxy in front of Apache, adding PHP APC, and optimizing the MySQL settings brought us down to where we rarely move beyond a load of 4.
CPU usage sits at 20-25% (24 threads).
The load is around 3.
Is this normal?
Sounds fairly consistent with how my nodes run.
I had a free shared host's server with 10k accounts hit loads of 500 because of 3 or 4 processes which I reniced to 19. Websites were still loading at a reasonable speed for a server with so many accounts.
Thanks @jarland
What load is ok is a function of number of cores.
A load of 32 is no problem for a 32-core box, but not for a dual proc...etc.
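To make that concrete, here is a minimal sketch (assuming Python's standard library on a Unix-like system) of normalizing load by core count; the helper name is just for illustration:

```python
import os

def load_per_core():
    """Return the 1-minute load average divided by the number of CPUs.

    A ratio near or below 1.0 means the run queue fits the available
    cores; well above 1.0 means work is queuing up.
    """
    one_min, _, _ = os.getloadavg()
    cores = os.cpu_count() or 1
    return one_min / cores

# A load of 32 on a 32-core box is a ratio of 1.0 -- busy but keeping up.
print(f"load per core: {load_per_core():.2f}")
```

On a dual-core box the same load of 32 would give a ratio of 16, which is the "not for a dual proc" case above.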
A load of 500 and websites were still loading... that's amazing.
It depends on what's actually causing the load to be high. For example, if it's high because of disk I/O, then anything that doesn't actually hit the disk will continue to work fine. On the other hand, if it's high because you have over 9000 threads all trying to do work at the same time, then you're gonna have a bad time.
We try to keep our VPS nodes at 2 or lower. Once we had a load of 37 on a dual-core shared hosting server; it stayed like that for 48 hours or so with only slightly decreased performance before we noticed.
I've seen a load of 182 on a server once, and it was working just fine. This was on a 4x 16-core AMD server with 128 GB of RAM.
Our nodes stay at most of the time
You don't have active monitoring?!?!?!
@Spencer now we do. Back then, nope.
I've tested loads of about 600-800 on a pyramid server, and it was still running smoothly.
Last night I had someone pushing 140MB/s of IO; not sure if it was reads or writes, I can't remember.
But the highest load I have had is 42, on 8-core servers.
It's not that they're abusers, just that scripts/software can often do funky stuff when left running for a long time or at peak times for DBs.
Where are you getting this load number from? Seems like /proc/loadavg shows CPU utilization as a percentage.
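For what it's worth, /proc/loadavg doesn't report a percentage: the first three fields are the 1-, 5-, and 15-minute load averages (roughly, the average number of runnable plus uninterruptible tasks). A minimal parser sketch, with the helper name and sample line invented for illustration:

```python
def parse_loadavg(text):
    """Parse the contents of Linux's /proc/loadavg.

    Fields: 1/5/15-minute load averages, currently-running/total
    task counts, and the most recently created PID.
    """
    fields = text.split()
    running, total = fields[3].split("/")
    return {
        "1min": float(fields[0]),
        "5min": float(fields[1]),
        "15min": float(fields[2]),
        "running": int(running),
        "total": int(total),
        "last_pid": int(fields[4]),
    }

# Sample line; on a Linux box you would read /proc/loadavg itself.
sample = "0.52 0.58 0.59 1/467 31337"
print(parse_loadavg(sample)["1min"])  # 0.52
```

Tools like uptime, top, and htop read and display these same numbers.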
We sustain a load of around 25 on our LA KVM node, at the moment it is at 22.50, the disks still do 300MB/s IO as they are SAS and the CPU is not an issue as it is dual 16 core (32 physical cores)
Uptime, top, htop ?
Due to the way OpenVZ and load averages work... we at one point had a node with a load of 200 which was perfectly usable. Obviously this isn't an everyday problem, or even something we encourage.
I've noticed that sometimes an OpenVZ container's load will get 'stuck' even though it isn't doing anything, which increases the overall node load but doesn't affect performance at all.
Nice necro bump.
Found this on a Google search and thought I would add some relevant information... the trolls are real.
I may be wrong on this, but there are two kinds of I/O on a CPU: blocking and non-blocking. When I/O blocks, the time shows up in iowait. When it's non-blocking, the CPU time used is much smaller and appears under the kernel (system) CPU %.
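On Linux, that iowait time is exposed in the first line of /proc/stat (field order after the 'cpu' label: user, nice, system, idle, iowait, irq, softirq, ...). A sketch of computing the iowait fraction between two samples; the function name and the sample lines are made up for illustration:

```python
def iowait_fraction(before, after):
    """Fraction of CPU time spent in iowait between two /proc/stat 'cpu' lines.

    Subtracts the earlier counters from the later ones so the result
    covers only the interval between the two samples.
    """
    b = [int(x) for x in before.split()[1:]]
    a = [int(x) for x in after.split()[1:]]
    deltas = [y - x for x, y in zip(b, a)]
    total = sum(deltas)
    return deltas[4] / total if total else 0.0  # index 4 == iowait

# Hypothetical samples taken a few seconds apart:
t0 = "cpu 1000 0 300 5000 200 0 0 0 0 0"
t1 = "cpu 1100 0 330 5400 370 0 0 0 0 0"
print(f"{iowait_fraction(t0, t1):.0%}")  # 24%
```

A high load average with a high iowait fraction points at the disks rather than the CPUs, which matches the earlier point that disk-bound load can leave CPU-only work unaffected.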
665.9, then all hell breaks loose.