Is this normal? Can 4 CPUs load exactly the same? This is weird (at least to me), as I haven't seen it on any of the boxes I have.
On a VPS of mine with 4 cores, the cores are loading exactly the same, as you can see in the attachments. The box is OpenVZ with 1GB of memory and access to 4 cores (E3-1240 v3 @ 3.40GHz).
The snapshots cover a time window of a few seconds, as you can see.
Comments
Some of my high-load instances with ChicagoVPS and BuyVM have the same behavior.
My CVPS VPSes are not showing the same behavior; the multiple CPUs do have different loads.
I noticed this a few days ago with a MC box; never thought anything of it.
For the record, it's not hosted at CVPS.
I have a few questions.
Can you check whether you also have any (or a lot of) CPU steal going on? Is the responsiveness of the node normal, or slow in any way? What type of application are you running?
@wojons The response of the box is normal now, I think. This is a box that had some outages over the last couple of weeks (12.5 hours in the last week!). I opened a ticket with the provider and got the response that there was an abuser on the node causing the trouble ("This is because of other VPS taken high CPU resources on the Host-node" was the answer). Now the VPS is running well, but the CPUs are still loaded equally.
And also for the record, it is not hosted at CVPS.
@jvnadr Do you have any monitoring for this server, by any chance? Something that can show some history on it? Also, you never said what you were running.
@wojons A single website with ISPConfig under it. I monitor it with Pingdom (only for outages, not for performance), but I do have the Solus internal monitoring for CPU, HDD and memory installed.
(The load you see in the graph is normal, with some minor peaks when optimizing the DB. The spikes in traffic are the active offline backups. The outages were all monitored externally by Pingdom and were the reason for the ticket I opened, which is now resolved. The server runs nicely, but the weird behavior of equally loaded CPUs is still there, and it seems not to affect performance.)
Without more information, I can try to explain this problem in the way I have seen it. If the host really only has one core available and presents a VM with 4 cores on top of it, you end up with exactly equal (or about equal) CPU usage across all cores. I would recommend restarting the server and seeing if it persists, and then taking snapshots of /proc/stat, which gives the jiffy counts per core. If they are about equal and stay within range, then it is probably what I suspect.
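The /proc/stat snapshot comparison suggested above could be scripted roughly like this (a minimal sketch, not a definitive diagnostic; it just sums the jiffy counters per core and compares the deltas over a short interval):

```python
import time

def per_cpu_jiffies(path="/proc/stat"):
    """Return {cpu_name: total_jiffies} for each per-core line in /proc/stat."""
    counts = {}
    with open(path) as f:
        for line in f:
            fields = line.split()
            # Per-core lines look like "cpu0 76999 0 20629 ..."; skip the aggregate "cpu" line.
            if fields and fields[0].startswith("cpu") and fields[0] != "cpu":
                counts[fields[0]] = sum(int(v) for v in fields[1:])
    return counts

before = per_cpu_jiffies()
time.sleep(2)
after = per_cpu_jiffies()

# If the per-core deltas stay (near-)identical even under load, the "cores"
# are likely one real core mirrored across several virtual ones.
deltas = {cpu: after[cpu] - before[cpu] for cpu in before}
print(deltas)
```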
Can you give us the 'top' header when this happens, please (not htop)?
@wojons So, if I understand correctly, those are not 4 actual cores I have access to, but virtual cores that are loaded equally? I should mention that in /proc/cpuinfo I can see access to 4 physical "GenuineIntel" cores.
This is the output:
`
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 60
model name : Intel(R) Xeon(R) CPU E3-1240 v3 @ 3.40GHz
stepping : 3
cpu MHz : 3399.997
cache size : 8192 KB
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts xtopology tsc_reliable nonstop_tsc aperfmperf unfair_spinlock pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx hypervisor lahf_lm arat epb xsaveopt pln pts dts
bogomips : 6799.99
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 60
model name : Intel(R) Xeon(R) CPU E3-1240 v3 @ 3.40GHz
stepping : 3
cpu MHz : 3399.997
cache size : 8192 KB
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts xtopology tsc_reliable nonstop_tsc aperfmperf unfair_spinlock pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx hypervisor lahf_lm arat epb xsaveopt pln pts dts
bogomips : 6799.99
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
processor : 2
vendor_id : GenuineIntel
cpu family : 6
model : 60
model name : Intel(R) Xeon(R) CPU E3-1240 v3 @ 3.40GHz
stepping : 3
cpu MHz : 3399.997
cache size : 8192 KB
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts xtopology tsc_reliable nonstop_tsc aperfmperf unfair_spinlock pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx hypervisor lahf_lm arat epb xsaveopt pln pts dts
bogomips : 6799.99
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
processor : 3
vendor_id : GenuineIntel
cpu family : 6
model : 60
model name : Intel(R) Xeon(R) CPU E3-1240 v3 @ 3.40GHz
stepping : 3
cpu MHz : 3399.997
cache size : 8192 KB
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts xtopology tsc_reliable nonstop_tsc aperfmperf unfair_spinlock pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx hypervisor lahf_lm arat epb xsaveopt pln pts dts
bogomips : 6799.99
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
`
And this is /proc/stat:
`
cpu 307998 0 82519 15423519 366334 0 0 203603
cpu0 76999 0 20629 3855879 91583 0 0 50900
cpu1 76999 0 20629 3855879 91583 0 0 50900
cpu2 76999 0 20629 3855879 91583 0 0 50900
cpu3 76999 0 20629 3855879 91583 0 0 50900
intr 0
swap 0 0
ctxt 4724657
btime 1402903188
processes 6821
procs_running 2
procs_blocked 0
`
This is the output (it looks like this all the time).
Just for top: when you're in top, press 1 and it will break down each CPU.
But yes, since you are on OpenVZ, these are going to be virtual cores. I don't know how the hosting provider has it set up, but based on what I see in /proc/stat, you may really only have one CPU core, which is very odd. I doubt the host will tell you, but ask them which scheduler they are using for the machine, because something is clearly wrong if every process you run is running on all 4 cores.
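For what it's worth, the symptom is visible directly in the /proc/stat output pasted above: every cpuN line carries identical counters. A quick check (just a sketch using those numbers):

```python
# Per-core lines copied from the /proc/stat output posted above.
stat_output = """\
cpu0 76999 0 20629 3855879 91583 0 0 50900
cpu1 76999 0 20629 3855879 91583 0 0 50900
cpu2 76999 0 20629 3855879 91583 0 0 50900
cpu3 76999 0 20629 3855879 91583 0 0 50900
"""

# Drop the cpuN label and compare the raw counter columns across cores.
rows = [line.split()[1:] for line in stat_output.splitlines()]
identical = all(row == rows[0] for row in rows)
print("all cores identical:", identical)  # → all cores identical: True
```

Independent cores virtually never tick in perfect lockstep like this, which supports the one-real-core suspicion.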
As you can imagine, the results in top (split by process) are the same as in htop (how could they be different?). So I have doubts that this can be real load split between cores, as /proc/cpuinfo seems to suggest. It is odd to have all CPUs loaded equally by every process.
Yeah, I am sure everything is the same in the top breakdown; I was just letting you know. Also, in a VM, all but a few setups will pass the CPU data straight down from the host.
This issue has happened for me with crissic.net. Did you find any solution?
+1 what @andrew said
@andrew @FrankZ No, not really. The VPS is working well, I guess, but I still don't have any valid answer. Some friends of mine had different opinions on whether this is an issue, whether it is caused by a particular setup, or whether it is normal behavior for some OpenVZ installations. The odd thing is that most of my other boxes (as far as I remember; I have not checked in depth) do not have this behavior: every CPU shows a different load.
I recently saw this with one VPS provider. It was announced by the hoster as a feature: they provide one (or a few) CPU cores, but the visible number of cores is 4x the real number. They said it's for performance reasons, to parallelize processes across cores. I don't think it's good for performance. It's on OpenVZ.
In your case, it means that you have only 1 CPU core, not 4.
Short Answer: Bug introduced in fairly recent OpenVZ kernel releases and now fixed in 92.1. Seems it was introduced when some changes were made to the scheduler.
Long Answer: I have seen this behaviour on my VPS after they updated the host node's kernel. (Note that the uname text comes from the host but is only updated when the container starts; if the host node is updated, the container will still show the old version until it is restarted.)
There was a security fix included in this release for containers using simfs so any nodes using ploop didn't need it and might not have been updated.
An OpenVZ installation at home using the same kernel version shows the same odd behavior. After updating the host to 2.6.32-042stab092.1 it shows separate CPU usage again in the containers.
From CU-2.6.32-042stab092.1 Parallels Virtuozzo Containers 4.7 Core Update (http://kb.parallels.com/en/122229), one of the bug fixes listed is:
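One way to check whether a container sits on a fixed build is to compare the stab number from `uname -r` against 092.1. A sketch (it assumes the usual OpenVZ release string format like 2.6.32-042stab092.1; and remember from above that the container may keep reporting the pre-reboot kernel):

```python
import re

def stab_version(kernel_release):
    """Extract the (build, patch) stab numbers from an OpenVZ kernel release string."""
    m = re.search(r"042stab(\d+)\.(\d+)", kernel_release)
    return (int(m.group(1)), int(m.group(2))) if m else None

FIXED = (92, 1)  # scheduler bug fixed in 2.6.32-042stab092.1 per this thread

for release in ["2.6.32-042stab090.3", "2.6.32-042stab092.1"]:
    version = stab_version(release)
    print(release, "fixed" if version >= FIXED else "affected")
# → 2.6.32-042stab090.3 affected
# → 2.6.32-042stab092.1 fixed
```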
@SkylarM What do you think? Is it a kernel issue or something else?
This makes the most sense.
I'd say what DBA posted above is accurate. As it's not a mission-critical bug or security update we won't be rebooting nodes quite yet to apply the fixed kernel.
After doing some checking this does seem to be a kernel bug since at least 042stab090.3
My Proxmox 2.6.32-30-pve machines do the same thing.
I see 042stab092.2 is now out and is a security update to fix CVE-2014-4699.