Pretty incredible load on a Quickweb VPS...
And it isn't running UnixBench or anything like that, just a WordPress installation behind Nginx.
I don't know, I would expect something like this on a really oversold VPS provider, but not on a Quickweb Xen VPS.
```
uptime
09:06:02 up 21:05, 1 user, load average: 21.75, 20.84, 14.76
uptime
09:08:20 up 21:07, 1 user, load average: 21.39, 20.75, 15.53
uptime
09:11:20 up 21:10, 1 user, load average: 21.53, 21.08, 16.57

cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 44
model name : Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
stepping : 2
cpu MHz : 2400.130
cache size : 12288 KB
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu de tsc msr pae cx8 sep cmov pat clflush mmx fxsr sse sse2 ss ht nx constant_tsc aperfmperf pni ssse3 sse4_1 sse4_2 popcnt hypervisor ida arat
bogomips : 4800.26
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:

processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 44
model name : Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
stepping : 2
cpu MHz : 2400.130
cache size : 12288 KB
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu de tsc msr pae cx8 sep cmov pat clflush mmx fxsr sse sse2 ss ht nx constant_tsc aperfmperf pni ssse3 sse4_1 sse4_2 popcnt hypervisor ida arat
bogomips : 4800.26
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
```
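For anyone seeing numbers like these: on Linux, the load average counts both runnable processes and processes stuck in uninterruptible sleep (usually disk wait), so a quick way to tell CPU contention from I/O starvation is to count each state. A minimal sketch, assuming a Linux guest with `ps` and `awk` available:

```shell
# The load average counts processes in state R (runnable) and state D
# (uninterruptible sleep, usually waiting on disk I/O). Counting each
# state shows which one is inflating the number.
ps -eo stat= | awk '
  /^R/ { running++ }
  /^D/ { diskwait++ }
  END  { printf "running: %d, disk-wait: %d\n", running, diskwait }
'
```

If disk-wait dominates, the box is starved for I/O (another guest thrashing the disk, a RAID resync, etc.) rather than for CPU.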
Comments
http://www.96mb.com/
Can we see load like this just because another person is abusing the processor?
If it's a Xen VPS, then there's a good chance the abuse is I/O-related, since Xen does a pretty good job of balancing CPU load (it balances I/O load well too, but at those levels disk I/O tends to be the culprit, unless somebody is running a fork bomb or something).
The problem is, this has been going on for quite a while; every time, they have promised me I won't see it again.
However, they at least still reply to my tickets, which is a good sign.
I doubt anyone could still get Nginx to respond when the load is this high, let alone MySQL.
There is a provider here whose nodes have that kind of load almost constantly. I have moved a few clients away, and every time I look the node load is around 25.
@96mb you still use one of these? I cancelled it long ago
http://www.asim.pk/my-virtual-private-servers-vps/
I do admit the support is fast, even if sometimes you feel the staff did not read your ticket before sending a canned or totally unrelated reply.
@96mb it is possible they have lost a drive and are resyncing the RAID at the same time as heavy disk use by other VMs.
I have also seen this happen on Xen PV when a few people try to run e.g. CentOS 6 64-bit or similar with 128 MB of RAM: the VPS just hangs during boot and for some reason eats 5k+ disk I/O requests every few seconds.
If they're thrashing the disk, for example, and then you try to do the same (not uncommon while benchmarking, really), then your iowait will be high, and so will your load. Load isn't entirely about CPU usage.
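That iowait point is easy to check from inside the guest. A minimal sketch, assuming Linux: the first line of `/proc/stat` holds cumulative CPU counters in jiffies, and the fifth numeric field is iowait, so the since-boot iowait share can be computed directly:

```shell
# /proc/stat's first line is: cpu user nice system idle iowait irq softirq ...
# A high iowait share means the load is disk wait, not real CPU work.
read -r _ user nice system idle iowait _ < /proc/stat
total=$((user + nice + system + idle + iowait))
echo "cumulative iowait share: $((100 * iowait / total))%"
```

These are since-boot totals, so for a live view it's easier to run `vmstat 1` and watch the `wa` column instead.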
Sounds like it's time to switch providers!
Well, it could've been just after midnight for a few of the containers, doing automated backups or whatever. A few of those running at once and hammering the HDD and CPU can drive up the load. I wouldn't say a one-off load of 20 is anything to worry about. If it happens regularly, maybe.
What happens during benchmarking and during normal use of a VPS are very different usage scenarios.
The load is just like my old VPS on Infinitie: fresh install, load above 20. I asked support and they said something was wrong with my script.
A few minutes later I cancelled my VPS and moved.
They probably tell you it won't happen again because they remove the abuser, only to find someone else abusing it later?
@96mb is the issue resolved?
Yes, a kernel panic, same as the past few times... and again, I think I was promised something along the lines that nothing like this would happen again.
This actually reminded me of one of LEA's posts; I think he had the same issue with LEB as well when it was hosted on Quickweb.