
Intel Scalable CPUs

randvegeta Member, Provider

Anyone have any Scalable CPUs yet? They are brand-new so I gather they are probably quite rare, but they look rather interesting compared to current E5s.

Trying to determine which CPU offers the best value for money for high-density hypervisors.

The Intel Xeon Platinum 8176 Processor looks pretty sweet, with 28 cores, 56 threads, 3.7 GHz turbo and a 2.1 GHz base clock. But at almost $9k, that's not chump change.

2x Intel Xeon Gold 6132 Processors would seem to be better value. Only 14 cores each, but the same 3.7 GHz turbo with a higher 2.6 GHz base. Two of these run about $4.3k, which is less than half the price of the Platinum 8176.

Would like to see some benchmarks on these babies.
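The value comparison above can be made concrete with a quick dollars-per-core calculation. This is just a sketch using the list prices and core counts quoted in the post, not verified current pricing:

```python
# Rough value comparison between the two configurations discussed above.
# Prices are approximate list figures from the post.

def per_core_cost(total_price_usd, sockets, cores_per_socket):
    """Dollars per physical core for a given configuration."""
    return total_price_usd / (sockets * cores_per_socket)

platinum_8176 = per_core_cost(9000, sockets=1, cores_per_socket=28)
dual_gold_6132 = per_core_cost(4300, sockets=2, cores_per_socket=14)

print(f"Platinum 8176: ${platinum_8176:.0f}/core")  # ~$321/core
print(f"2x Gold 6132:  ${dual_gold_6132:.0f}/core")  # ~$154/core
```

By this crude metric the dual Gold 6132 setup delivers the same 28 physical cores at less than half the silicon cost per core, though it ignores platform cost, power, and per-core licensing.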

Comments

  • I don't really care much about CPUs myself; I just get CPUs that use low power (my PC ... runs 24/7), so I don't have that much CPU knowledge. For reference, two Xeon Gold 6132s have a combined TDP of 280 W, while a single Xeon Platinum 8176 is rated at 165 W; the cache is also larger on the Xeon Platinum 8176.

  • @William is very knowledgeable with this kind of stuff if I recall.

  • William Member, Provider

    randvegeta said: Trying to determine which CPU offers the best value for money for high-density hypervisors.

    Wait for wide scale EPYC roll-out.

    AMD beats Intel's pricing, and the new interconnects each uses (Infinity Fabric and UPI) have similar issues (RAM-to-core links and so on). AMD gives more PCIe lanes and generally has the better server-oriented chipset: for less complexity it is now an SoC, while on Intel you again get a separate chipset and, IIRC, now also external VRMs, which has both advantages and drawbacks, as you can see on Skylake-X.

    randvegeta said: Trying to determine which CPU offers the best value for money for high-density hypervisors.

    What you plan - dual CPU - is not considered high density with this CPU generation anymore (it was barely before, either), considering they already do 8S (Intel) and 4S (AMD) scalability by default, with UPI/IF not imposing many limits.

    These servers really don't make much sense at this time, especially with some large suppliers not actually having hardware available yet (no ZTE/Huawei, IIRC no public IBM, and Dell/HP hardly in stock).

    You also likely want to wait for virtualization performance reports and newer kernels that can take advantage of (or rather fix the issues with) UPI/IF and control how and what memory is allocated to each core.

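Before worrying about kernel-level memory placement, it helps to see the NUMA layout the OS actually exposes. Below is a toy sketch that parses `numactl --hardware`-style output into a node-to-CPU map; real output varies by distro and numactl version, and the sample topology here is invented for illustration:

```python
import re

def parse_numa_nodes(numactl_hw_output):
    """Parse `numactl --hardware`-style output into {node_id: [cpu_ids]}.
    Toy parser for illustration; real-world output varies."""
    nodes = {}
    for line in numactl_hw_output.splitlines():
        m = re.match(r"node (\d+) cpus: (.*)", line)
        if m:
            nodes[int(m.group(1))] = [int(c) for c in m.group(2).split()]
    return nodes

# Invented 2-socket sample; on a live box you would capture the real output
# with: subprocess.check_output(["numactl", "--hardware"], text=True)
sample = """\
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3
node 0 size: 32768 MB
node 1 cpus: 4 5 6 7
node 1 size: 32768 MB
"""
print(parse_numa_nodes(sample))  # {0: [0, 1, 2, 3], 1: [4, 5, 6, 7]}
```

Knowing which cores share a node is the first step to pinning VMs so guest memory stays local rather than crossing the UPI/IF link.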
  • Clouvider Member, Provider

    Epyc TDP vs Intel E5 is not that pretty though.


  • xyz Member

    Clouvider said: Epyc TDP vs Intel E5 is not that pretty though.

    Recommend looking at actual power consumption figures, as TDP seems to be a rather poor indicator of it these days...

    AMD have much better value chips than Intel at the moment. Would generally recommend them unless you have some special reason to need Intel.
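Actual draw can be measured rather than guessed from TDP. A minimal sketch of computing average package power from two Linux RAPL energy-counter readings (the sysfs path in the docstring and the 32-bit wrap range are assumptions; check your platform's `max_energy_range_uj`):

```python
def avg_power_watts(energy_uj_start, energy_uj_end, interval_s,
                    max_range_uj=2**32):
    """Average package power between two RAPL energy_uj samples,
    e.g. read from /sys/class/powercap/intel-rapl:0/energy_uj on Linux.
    The counter wraps around, so handle a single wraparound."""
    delta = energy_uj_end - energy_uj_start
    if delta < 0:                      # counter wrapped between samples
        delta += max_range_uj
    return delta / interval_s / 1e6    # microjoules per second -> watts

# Example: 120 J consumed over 1 s -> 120 W average package power.
print(avg_power_watts(1_000_000, 121_000_000, 1.0))  # 120.0
```

Sampling this over a realistic workload gives a far better basis for comparing EPYC vs Xeon power bills than the nameplate TDP.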

  • William Member, Provider
    edited September 13

    Clouvider said: Epyc TDP vs Intel E5 is not that pretty though.

    Somewhat true, yes, but traditionally Intel always won on single-core performance and overall TDP. AMD's strengths were scalability (especially to 4S and 8S with Opteron), memory channels (e.g. 4S Opteron already offers a lot of channels and does not require special CPUs the way the E7's memory buffers do, and 8S is yet another world on Intel while AMD stays nearly the same), and ECC on the desktop, since AMD's desktop chips were always very similar to the server ones.

    Recently AMD also has a strong iGPU sector thanks to, well, AMD GPUs (e.g. look at socket AM1: low TDP, but the R7 GPU is pretty good). That may still be expanding on desktop, and it is certainly present in mobile, both on x64 (laptop/tablet) and as Adreno in ARM-based designs (phone/tablet).

    Here, at least for me, the price difference to comparable Intel Xeons is already a strong indicator towards AMD on the low end (whereas Opteron never really had this advantage, given its major IPC/TDP gap to anything comparable from Intel).

    On the high end we still need to see 4S/8S EPYC, but given the insane price premium you pay, versus either getting 2 EPYC systems with more multicore performance or 2 older/different (E3) Intel systems with more single-core performance (if you REALLY need the GHz, like high-speed financial trading without OC), I don't see the market.

    As before (DL580, DL980 in 4S and 8S), and as we now see on the multi-CPU SM boards, Intel has the better failover/mirror-CPU and other options required for the big market: mirror all CPU actions N-way, mirror memory N-way, and now, as a new addition driven by the NVMe rollout, mirror PCIe N-way, where N == S-1 (thus at 8S scalability, up to 7 CPUs). The ongoing contracts secure this market outside of the "edge" cases (X5698, speed trading, certain oil and weather workloads, AWS, Google) anyway. Those contracts are also why you could see these chips in use at AWS/Google very early.


    Comparing UPI and Infinity Fabric also calls for a note on Intel's OmniPath, which in certain configurations can be used to improve the CPU interconnect and bypass IF's issues with die/memory allocation via a direct link at higher speed than IF offers, but with way more work required. (No time to read all the docs, but it sits well above what IF does and is designed more to interface processing nodes, as InfiniBand did, than to interconnect CPUs, even if it can improve things there on top of UPI.)

    UPI is just QPI with minor changes anyway; you can read up on QPI from socket 1366 through the last E5s to see how it works, and why it is better than what Opteron had.

    AMD now opts to basically use PCIe lanes for the CPU interconnect and still moves toward an SoC design for chipset functions. This is partly what Intel does/did anyway (the DMI3 link on a Skylake+ CPU is nothing but 'custom' PCIe 3.0, and the 24 lanes from the Z270 are just uplinked over ~4-6 of them depending on usage), but it has latency issues if you need to cross multiple hops to access e.g. a specific memory page.

    https://www.nextplatform.com/2017/07/12/heart-amds-epyc-comeback-infinity-fabric/

  • randvegeta Member, Provider
    edited September 13

    There is a lot of info to digest... Looks like more research is needed (for me! Clearly I am nowhere near as well informed as some of you).

  • bsdguy Member
    edited September 13

    I largely agree with @William. I have slightly different foci and weightings, but he lays it out quite well.

    And I also agree with him on Intel/AMD. After all, most of us talk in a business context (as opposed to an "I want the fastest everything" gaming/whatever machine). So in all but very few cases it comes down to performance for your scenario, power/density, and nodes per rack - versus cost.

    Practically speaking, there are better screws to turn than simply "get the newest/fastest/whatever CPU". RAM is probably the best example; in plenty of cases I have gotten far more performance gains by simply adding RAM than by changing to a faster processor or newer generation.

    Finally, as unpleasant as it may seem, there is no "fastest" or "cheapest" system as such. It's always about your scenario, priorities, and budget frame - which means that any really smart decision can't but be the result of some analysis.

    As for Intel's newest series: I couldn't care less. William is right; AMD EPYC, generally speaking, is pretty much always the better bang-for-the-buck choice. 24 AMD cores vs 28 Intel cores? Wrong question.

