Processor Temperature in DataCenter Environment
Mahfuz_SS_EHL
Host Rep, Veteran
Hello,
I need to know what the optimal temperature for a processor in a datacenter environment should be. I know the TDP, but I'd like to hear your real-life experience. At idle, what should the temp be, and at full load, what should it be?
Processors such as the E-2124/E-2226G/E-2236 in the Intel lineup and the Ryzen 3600/3700X/3900X in the AMD lineup.
In case anyone doesn't know how to get temperature readings: if you don't have IPMI, you can install lm_sensors (a CentOS package) and run "sensors" to get temps. You can also run "sensors-detect" to detect the sensors.
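If you'd rather not install anything, here's a minimal sketch that reads the same data straight from the kernel's hwmon sysfs interface (the source lm_sensors itself uses). Assumes a Linux host; on some VMs no hwmon devices are exposed, so it just returns an empty dict.

```python
#!/usr/bin/env python3
"""Read CPU temperatures from /sys/class/hwmon (Linux only)."""
import glob
import os


def read_hwmon_temps():
    """Return {sensor_label: temp_celsius} from the hwmon sysfs tree."""
    temps = {}
    for temp_file in glob.glob("/sys/class/hwmon/hwmon*/temp*_input"):
        try:
            with open(temp_file) as f:
                millic = int(f.read().strip())
        except (OSError, ValueError):
            continue  # sensor vanished or returned junk; skip it
        # A matching tempN_label file names the sensor (e.g. "Core 0").
        label_file = temp_file.replace("_input", "_label")
        if os.path.exists(label_file):
            with open(label_file) as f:
                label = f.read().strip()
        else:
            label = temp_file  # fall back to the sysfs path itself
        temps[label] = millic / 1000.0  # kernel reports millidegrees C
    return temps


if __name__ == "__main__":
    for label, temp in sorted(read_hwmon_temps().items()):
        print(f"{label}: {temp:.1f} C")
```

Note that on multi-socket boxes two devices can report the same label (e.g. two "Core 0" entries), so for serious monitoring you'd key on the hwmon device name as well.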
Regards.
Comments
Sorry for tagging you all, hope you don't mind.
@seriesn @Francisco @HostSlick @hosthatch @VirMach @georgedatacenter @Hassan @dustinc @NDTN @ExonHost @key900 & All Other My Experienced Brothers.
Idle temp will depend on CPU frequency scaling and power-saving C-states.
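You can see that effect yourself: here's a quick sketch that dumps the cpufreq governor and current clock per core from sysfs. A box sitting in "powersave" at a few hundred MHz will idle far cooler than one pinned to "performance". Linux only; the cpufreq paths may be absent in VMs, in which case you get an empty dict.

```python
#!/usr/bin/env python3
"""Show per-core cpufreq governor and current frequency (Linux only)."""
import glob


def read_cpufreq():
    """Return {cpu_name: (governor, cur_freq_mhz)} for CPUs exposing cpufreq."""
    info = {}
    pattern = "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor"
    for gov_path in glob.glob(pattern):
        cpu = gov_path.split("/")[5]  # e.g. "cpu0"
        try:
            with open(gov_path) as f:
                governor = f.read().strip()
            freq_path = gov_path.replace("scaling_governor", "scaling_cur_freq")
            with open(freq_path) as f:
                freq_mhz = int(f.read().strip()) / 1000  # sysfs is in kHz
        except (OSError, ValueError):
            continue  # CPU offline or file unreadable; skip it
        info[cpu] = (governor, freq_mhz)
    return info


if __name__ == "__main__":
    for cpu, (gov, mhz) in sorted(read_cpufreq().items()):
        print(f"{cpu}: governor={gov}, {mhz:.0f} MHz")
```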
Most of my Intel dedis are between 30 and 60 degrees C.
Thanks for the information. May I know at what load it reaches 60 degrees, and on which processor model?
We're seeing around 45°C idle on a Ryzen 9 3900X, and about 70°C under full load.
I ran cpuburn on some of my idlers for you.
Xeon W-2145 sits between 40 and 50 degrees with ~40% load and doesn't really go over 60 at max load.
Xeon E5-1630 v4 doesn't go over 42 at max load.
Xeon E5-1620 v3 idles at 40, doesn't go over 45 at max load.
Xeon E3-1240 v2 idles at 35, doesn't go over 61 at max load.
i7-4790K idles 30, doesn't go over 50 at max load.
i5-3570S idles at 35, doesn't go over 52 at max load.
i7-2600K idles at 35, doesn't go over 65 at max load.
If you're evaluating temperatures, remember that some CPU burn tests use instructions that push temps to the extreme, which isn't realistic since no normal software will do that.
On the Intel chips we use, we see 40-60°C.
My Intel servers in SBG2 peaked at 100+ degrees C before getting their power cut.
Same here. Intel E-2136 with 6.7 GHz of usage, at 58°C.
If you are in OVH SBG2...
Are you talking socket temps or core temps? Big difference.
Those listed temps seem really low... The dedis I have run at 90°C+ under full load. I don't know how you can have 40-50 under load.
E-2236 at OVH RBX/GRA goes to 80-95 peak
I constantly pulled 100% load on both Intel and AMD. Intel went to 95°C+, AMD was 85-90°C. I don't know why my experience doesn't match others'. Either my setup has a problem or others didn't pull 100% (though it's unrealistic to pull 100% all the time).
42 at max load in a 1U? That's not possible even in an ATX case with water cooling.
That's a dedi at OVH, so it's got good water cooling. Looking at the recorded temps, it actually peaked at 46 at some point, but it didn't go over 42 while running cpuburn for a while yesterday.
Of course peak temps are load-dependent. Maxing out the cores with a simple cpuburn won't heat them up as much as densely packed AVX instructions would.
Just run Prime95 on it and I can bet you'll see a whole different ballgame with those temps.
Prime95 is an example of an unrealistic benchmark which pushes the hardware to the extreme. It's a worst case scenario which is designed to reach temps that wouldn't happen under any normal kind of workload. It can be useful to stress test hardware, but it's not a good way of measuring temperatures that would be expected at high load.
High load != max load. I agree Prime95 isn't the perfect benchmark, but then again there are none that really measure server loads. Most of them just stress the CPU, and Prime95 is the best at that.
When I say "high load", I really mean max load without being hopelessly overloaded. Overloading the hardware would actually decrease the throughput and temps in most cases, since the CPU will spend more time on things like IO and context switching.
Modern CPUs are complex beasts with many different kinds of instructions which utilize specialized pieces of hardware on the chip. Maxing out different pieces of hardware results in different temperatures, and the utilized hardware depends on exactly how the software is implemented, which compiler was used, which optimizations are enabled, and which system it was compiled on.
Prime95 doesn't just max out the FPUs, but will also utilize AVX if supported by the chip, allowing it to do many FPU calculations per instruction. Most workloads don't even max out the FPUs, and most real-world software doesn't even make use of AVX. Disabling AVX before running Prime95 would provide a more realistic benchmark for stability and more reasonable temps, but it still only simulates a very specific kind of workload, which will be very different from almost any real-world use case.
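If you want to try that, Prime95 reads CPU feature overrides from its local.txt. The flags below are the commonly cited ones for masking the AVX/FMA code paths; check the undoc.txt shipped with your Prime95 version to confirm they apply to your build:

```
; local.txt -- tell Prime95 to skip AVX/FMA code paths
; (verify these against the undoc.txt in your Prime95 build)
CpuSupportsAVX=0
CpuSupportsAVX2=0
CpuSupportsFMA3=0
```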
Run x264 encoding for an hour then. It's a very realistic load, and you'll get temps similar to Prime95.
I own a server with a Xeon E5-1620 v3, and for me it usually idles at 35°C with a max of 42°C.
Or even better, run aomenc. x264 is too easy.
Our Intels run at around 40 degrees under load.