AMD EPYC "Milan" CPUs

What are everyone's thoughts on the upcoming AMD Milan CPUs? (Shipping this year)

What kind of impact do you think they will have on VM providers, game hosting (high clock speed CPUs), web hosting, etc.?

For somewhat of a baseline, we should be able to draw a line from the current EPYC and Ryzen 3000 series to the 5000 series CPUs.

Comments

  • @stevmc said:
    (Shipping this year)

    Nope: https://www.heise.de/news/AMD-Zen-3-Vorstellung-neuer-Epyc-Prozessoren-Anfang-2021-4962007.html

    Keep in mind that in multi-threaded scenarios the 5000 series isn't a big upgrade over the 3000 series (around 11% in the best-case scenario).

  • jsg Member, Resident Benchmarker

    On YouTube they are and will be hyped, mainly by those "reviewers" who cater to the gamer crowd.
    In real life it still isn't that simple to find brand-name Zen 2 hardware with good and full support (e.g. 2 sockets for Epyc) for servers, notebooks, and tiny embedded SoCs.

    If you are a gamer, order the fastest one right now! If you are a normal, reasonable person or even an engineer, ask again in a year (till then you'll be very well served by Zen 2).

  • If we're ordering new equipment, we'll obviously go for Zen3 over Zen2. We won't replace Zen2 CPUs in servers. It's useless.

    But from the day Zen3 is available, there's no point in Zen2 for new equipment.

  • jsg Member, Resident Benchmarker

    @Zerpy said:
    If we're ordering new equipment, we'll obviously go for Zen3 over Zen2. We won't replace Zen2 CPUs in servers. It's useless.

    But from the day Zen3 is available, there's no point in Zen2 for new equipment.

    And that's somewhat sad because low-end (hosting) largely translates to second-hand machines, so we need providers and companies to replace Zen2 machines with Zen3 ones to create a supply of (relatively) cheap second-hand Zen2 servers.

  • @jsg said:
    And that's somewhat sad because low-end (hosting) largely translates to second-hand machines, so we need providers and companies to replace Zen2 machines with Zen3 ones to create a supply of (relatively) cheap second-hand Zen2 servers.

    That sounds like their problem, not ours; we buy new equipment when we need it.

  • Zerpy Member
    edited November 2020

    @jsg said:
    In real life it still isn't that simple to find brand-name Zen 2 hardware with good and full support (e.g. 2 sockets for Epyc) for servers, notebooks, and tiny embedded SoCs.

    You mean Dell, HP, Supermicro, Lenovo/IBM and Gigabyte are not brand names?

  • jsg Member, Resident Benchmarker
    edited November 2020

    @Zerpy said:

    @jsg said:
    In real life it still isn't that simple to find brand-name Zen 2 hardware with good and full support (e.g. 2 sockets for Epyc) for servers, notebooks, and tiny embedded SoCs.

    You mean Dell, HP, Supermicro, Lenovo/IBM and Gigabyte are not brand names?

    Apologies, I worded it sloppily. What I meant was good-quality brand servers like Cisco, Dell, Fujitsu, HP (to a degree), and Lenovo, and by relevant servers I meant what IMO to a considerable degree translates to blade systems, as well as embedded systems.

    I'll start with the latter: from what I see there is still a somewhat limited choice of Ryzen Embedded 1000-series systems, and the 2000-series systems are only just coming up (more announcements than purchasable hardware as of now).

    Re. the servers: no Cisco B-series blades at all, no Dell blades, no Fujitsu blades, no HP blades. Supermicro, which I consider second class, has at least 3 blades, none of which offers 2 sockets. From what I see there are but a few quality-brand 1HU and 2HU servers, not all of which offer 2 sockets; plus IMO 1HU dual-socket systems have the major drawback of limited lifetime and energy-gobbling, wasteful tiny fans.

    Re notebooks, the best I know of are Ryzen 4000 (Zen 2) based systems.

    TL;DR: no 2-socket Epyc blades at all and only a quite limited choice of 2HU dual-socket servers.

  • @jsg said:
    Apologies, I worded it sloppily. What I meant was good-quality brand servers like Cisco, Dell, Fujitsu, HP (to a degree), and Lenovo, and by relevant servers I meant what IMO to a considerable degree translates to blade systems, as well as embedded systems.

    blade systems are not really relevant, not in 2015.

    Re. the servers: no Cisco B-series blades at all, no Dell blades, no Fujitsu blades, no HP blades. Supermicro, which I consider second class, has at least 3 blades, none of which offers 2 sockets. From what I see there are but a few quality-brand 1HU and 2HU servers, not all of which offer 2 sockets; plus IMO 1HU dual-socket systems have the major drawback of limited lifetime and energy-gobbling, wasteful tiny fans.

    Blade servers are generally a waste of space, because of cooling restrictions. 1U and 2U servers are plenty better in most environments.

    TL;DR: no 2-socket Epyc blades at all and only a quite limited choice of 2HU dual-socket servers.

    Exactly, no one gives a shit about blades in 2015 - but I guess that's where your logic disappears.

    You can build such dense systems with dual-socket 1U AMD servers that you're perfectly fine.

  • jsg Member, Resident Benchmarker
    edited November 2020

    @Zerpy said:
    blade systems are not really relevant, not in 2015.

    ?? 2015 ??

    Blade servers are generally a waste of space, because of cooling restrictions. 1U and 2U servers are plenty better in most environments.

    A rather unique view ...
    And btw. space isn't the major criterion and certainly not with dual socket Epycs (hint: there's only so much power in a rack, often < 10kW and even down to 6 kW).

    Exactly, no one gives a shit about blades in 2015 - but I guess that's where your logic disappears.

    (leaving aside the nonsensical "2015") ...
    Good luck wiring, switching, managing hundreds of non-blade servers. Large scale hosting sooner or later boils down to either custom hardware or blades.

    You can build such dense systems with dual-socket 1U AMD servers that you're perfectly fine.

    Assuming a 10 kW rack and 1HU Epyc servers even with just 500W per server the density will be very poor.
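
    To put rough numbers on that, here's a back-of-the-envelope Python sketch (purely illustrative; the 10 kW budget, 42U of space, and roughly 500 W per dual-socket 1HU Epyc node are just this thread's assumptions, not measured values):

      # Which limit is hit first, power or space? All numbers are assumptions
      # taken from this thread, not measurements.
      RACK_POWER_W = 10_000   # assumed rack power budget
      RACK_UNITS   = 42       # assumed usable rack units
      NODE_POWER_W = 500      # assumed draw of one dual-socket 1HU Epyc node

      nodes_by_power = RACK_POWER_W // NODE_POWER_W   # 20 nodes
      nodes_by_space = RACK_UNITS                     # 42 nodes if space were the only limit
      nodes = min(nodes_by_power, nodes_by_space)

      print(f"power allows {nodes_by_power} nodes, space allows {nodes_by_space}")
      print(f"the rack tops out at {nodes} nodes, leaving {RACK_UNITS - nodes}U empty")

    With those assumed numbers the power budget runs out at roughly half the rack, which is the "very poor density" point above.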

  • Zerpy Member
    edited November 2020

    @jsg said:

    @Zerpy said:
    blade systems are not really relevant, not in 2015.

    ?? 2015 ??

    Exactly, you don't get it.

    Blade servers are generally a waste of space, because of cooling restrictions. 1U and 2U servers are plenty better in most environments.

    A rather unique view ...

    No.

    And btw. space isn't the major criterion and certainly not with dual socket Epycs (hint: there's only so much power in a rack, often < 10kW and even down to 6 kW).

    Let's say 10kW, you have 42U - that's 238 watts per U - you can greatly exceed that with 1U servers.

    Thus using a blade, you'll waste space that you have to pay for anyway.

    Therefore space is indeed not the major concern, but power and cooling - so why use blades when you're not benefiting from their density?

    Good luck wiring, switching, managing hundreds of non-blade servers. Large scale hosting sooner or later boils down to either custom hardware or blades.

    Having worked for multiple large scale hosting providers - this has not been a problem. Large scale is 40k+ physical servers - not really a problem :) Clearly you seem to believe so. Maybe lack of experience.

    Edit: Surely blades look cleaner, but it's not a problem.

    Assuming a 10 kW rack and 1HU Epyc servers even with just 500W per server the density will be very poor.

    No.

  • jsg Member, Resident Benchmarker
    edited November 2020

    @Zerpy said:

    Blade servers are generally a waste of space, because of cooling restrictions. 1U and 2U servers are plenty better in most environments.

    A rather unique view ...

    No.

    And btw. space isn't the major criterion and certainly not with dual socket Epycs (hint: there's only so much power in a rack, often < 10kW and even down to 6 kW).

    Let's say 10kW, you have 42U - that's 238 watts per U - you can greatly exceed that with 1U servers.

    No. For one, you'll usually lose 1 or 2 HU to switches. Plus, and more importantly: good luck running dual-socket Epyc servers on 238W (you were talking about density, right?).

    Good luck wiring, switching, managing hundreds of non-blade servers. Large scale hosting sooner or later boils down to either custom hardware or blades.

    Having worked for multiple large scale hosting providers - this has not been a problem. Large scale is 40k+ physical servers - not really a problem :) Clearly you seem to believe so. Maybe lack of experience.

    Nice try at playing the "I know what large scale really means" game, but a rack is a rack no matter whether you have a couple or tens of thousands.
    Also note that I mentioned switching, system and network management, and maintenance.

    Edit: Surely blades look cleaner, but it's not a problem.

    The second time you say "no problem". Kindly note that I didn't say that using standard servers vs. blades is a problem.

    Assuming a 10 kW rack and 1HU Epyc servers even with just 500W per server the density will be very poor.

    No.

    Cool! So tell me your secret! How do you manage to run (even just) 40 500W servers with 10 kW?
    Because in the universe I live in, the laws of physics and mathematics are valid and binding, and 40 x 500W = 20 kW.
    Btw, you do know that, especially in 1 HU systems, cooling eats a quite significant part of a server's el. power (up to > 10%), right?
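
    The same sketch with the fan overhead folded in (again a rough Python sketch using only the numbers assumed in this thread - 500 W per node, a ">10%" fan share - not measurements):

      RACK_POWER_W = 10_000
      NODE_POWER_W = 500       # assumed per-node draw
      FAN_SHARE    = 0.10      # assumed share of a node's draw going to its own tiny 1HU fans

      draw_of_40_nodes = 40 * NODE_POWER_W               # 20,000 W, twice the budget
      nodes_that_fit   = RACK_POWER_W // NODE_POWER_W    # 20 nodes
      useful_watts     = nodes_that_fit * NODE_POWER_W * (1 - FAN_SHARE)

      print(f"40 nodes would draw {draw_of_40_nodes} W against a {RACK_POWER_W} W budget")
      print(f"only {nodes_that_fit} fit, and roughly {useful_watts:.0f} W of that does useful work")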

  • @jsg said:
    No. For one, you'll usually lose 1 or 2 HU to switches. Plus, and more importantly: good luck running dual-socket Epyc servers on 238W (you were talking about density, right?).

    10000 / 42 == 238 watts. Split it as you wish. Switches also use power.

    And exactly, you exceed that power figure with a dual Epyc, thus your compute density is higher than your power allows.

    Thus high density.

    Nice try at playing the "I know what large scale really means" game, but a rack is a rack no matter whether you have a couple or tens of thousands.

    You cable things wrong if it's a mess - again a you problem.

    Also note that I mentioned switching, system and network management, and maintenance.

    Not a problem in normal DCs.

    The second time you say "no problem". Kindly note that I didn't say that using standard servers vs. blades is a problem.

    You basically did.

    Cool! So tell me your secret! How do you manage to run (even just) 40 500W servers with 10 kW?

    You don't. Thus your density is high. Blades don't make it any better. That's the whole point.

    Btw, you do know that, especially in 1 HU systems, cooling eats a quite significant part of a server's el. power (up to > 10%), right?

    Yes. Again - blades don't fix that problem - you'll likely have the same or more empty rack space.

  • jsg Member, Resident Benchmarker

    @Zerpy said:

    @jsg said:
    No. For one, you'll usually lose 1 or 2 HU to switches. Plus, and more importantly: good luck running dual-socket Epyc servers on 238W (you were talking about density, right?).

    10000 / 42 == 238 watts. Split it as you wish. Switches also use power.

    And exactly, you exceed that power figure with a dual Epyc, thus your compute density is higher than your power allows.

    Thus high density.

    What a pile of BS. For a start, space is one of the cheaper factors in colo costs once we're talking about full racks (or, to a slightly lesser degree, half racks), so "wasting space" is not of high concern.

    Plus, usually there is a use case and hence a priority like "as much performance as possible (per rack)" or "most bang for the buck", etc. Going Epyc strongly suggests that performance is of high concern, possibly hand in hand with "reasonable" el. power consumption - and whatever the priorities are, they all sit within the power constraints of a rack, which more often than not is on the lower end (read: 6 kW) and rather rarely above 10 kW, maybe 12 kW.

    Nice try at playing the "I know what large scale really means" game, but a rack is a rack no matter whether you have a couple or tens of thousands.

    You cable things wrong if it's a mess - again a you problem.

    Again you pull something out of thin air that wasn't in question.

    Also note that I mentioned switching, system and network management, and maintenance.

    Not a problem in normal DCs.

    BS! Management and maintenance are largely human-resource bound and hence a significant cost factor. And btw, you are jumping between a DC and a rack perspective, which are very different wrt maintenance and management.

    The second time you say "no problem". Kindly note that I didn't say that using standard servers vs. blades is a problem.

    You basically did.

    So, you know better than me? I don't think so. I think that your whole position is incoherent, jumping between rather different positions (e.g. DC vs colo'd rack), pulling things out of thin air, and simply asserting things, based, it seems, on the (wrong) assumption that I take your assertions at face value and as being backed by oh so much experience.

    Cool! So tell me your secret! How do you manage to run (even just) 40 500W servers with 10 kW?

    You don't. Thus your density is high. Blades don't make it any better. That's the whole point.

    Btw, you do know that, especially in 1 HU systems, cooling eats a quite significant part of a server's el. power (up to > 10%), right?

    Yes. Again - blades don't fix that problem - you'll likely have the same or more empty rack space.

    BS again! For one you don't get to define "high density" just the way you wish, plus it makes a significant difference whether one cools 10 or 14 systems with a few e.g. 120 mm fans or with a lot of tiny 1 HU fans. Plus, if space is of concern one can fit more performance into 2 socket systems than into 1 socket systems.

  • @stevmc said: What are everyone's thoughts on the upcoming AMD Milan CPUs? (Shipping this year)

    Exciting times ahead, especially when these AMD EPYC Milan CPUs age and become the next E5620 Westmere-like cheap bargain CPUs, packing a huge amount of performance for the price when that time comes :)

  • Zerpy Member
    edited November 2020

    @jsg said:
    What a pile of BS. For a start, space is one of the cheaper factors in colo costs once we're talking about full racks (or, to a slightly lesser degree, half racks), so "wasting space" is not of high concern.

    You seem like a dense person. Let me explain it so your brain can compute it.

    You lease space in a datacenter, this can be half a rack, 1 rack, or more - each rack is designed for X load in terms of power and cooling capacity.

    You can get high density racks which obviously allows for higher power (and cooling) capacity in the same amount of physical space.

    You're limited by mainly two things here:

    • Physical space
    • Power and cooling capacity of the rack

    When you design your infrastructure, you can design it based on the above two factors.

    When you hit one of the limits, your rack is full.

    If you hit the power and cooling capacity, then it doesn't matter whether you have 5 units or 20 units "free", you've reached the capacity of the rack, so you need to buy another rack if you want to expand any further.

    If we assume 10kW is the limit of a 42U rack, then the limit of a 21U (half) rack will be basically half. It's kinda obvious.

    With your 10kW limit, you can then select hardware that fits in this 42U rack.

    You can either go for blade servers (let's say 16 nodes per 10U) - this gives you 32 CPUs in 10U of space, which is an amazing density, because your density is effectively 3.2 CPUs per 1U.

    You can also opt for 16x 1U servers which will give you the same number of CPUs, but your density is 2 CPUs per 1U.

    You have a lower density by going for 1U servers, if we look purely at the utilized space for servers.

    However, since you have to stay within 10kW, you're not able to fill up a full rack with either a blade system or 1U servers - you'll reach the 10kW prior to having a full rack of equipment.

    Your density of utilized units is higher with blade servers than with 1U servers, but since you're hitting the power limit prior to the space limit (42U), your actual compute density per rack will be the same thing, regardless of what option you take.

    That shouldn't be so hard to understand.
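
    A rough Python sketch of that comparison (the 16-nodes-per-10U chassis and the 10 kW / 42U figures are from the example above; the ~500 W per dual-socket node is an extra assumption):

      import math

      RACK_POWER_W  = 10_000
      RACK_UNITS    = 42
      NODE_POWER_W  = 500     # assumed dual-socket node draw
      CPUS_PER_NODE = 2

      # (nodes per chunk, units per chunk): a 10U chassis with 16 nodes vs a 1U server
      options = {"blade chassis": (16, 10), "1U servers": (1, 1)}

      for name, (nodes_per_chunk, units_per_chunk) in options.items():
          power_limited = RACK_POWER_W // NODE_POWER_W                       # 20 nodes
          space_limited = (RACK_UNITS // units_per_chunk) * nodes_per_chunk
          nodes = min(power_limited, space_limited)
          units = math.ceil(nodes / nodes_per_chunk) * units_per_chunk
          print(f"{name}: {nodes} nodes / {nodes * CPUS_PER_NODE} CPUs in {units}U, "
                f"{RACK_UNITS - units}U left empty")

    Both options hit the 10 kW ceiling at the same node count, which is the whole point: under that power budget the form factor doesn't change the compute you get per rack.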

  • Zerpy Member
    edited November 2020

    Plus, usually there is a use case and hence a priority like "as much performance as possible (per rack)" or "most bang for the buck", etc. Going Epyc strongly suggests that performance is of high concern, possibly hand in hand with "reasonable" el. power consumption - and whatever the priorities are, they all sit within the power constraints of a rack, which more often than not is on the lower end (read: 6 kW) and rather rarely above 10 kW, maybe 12 kW.

    Sure, selecting between blades or 1U servers is not gonna change the power constraints, and as you said yourself - space is cheap, so who cares if we use 20U for blades or 32U of space (not that I think we could effectively reach that in either case with dual EPYC systems anyway).

    So the whole point of blade servers is moot. Blade servers were made mainly for density purposes - we're at a point where density by default is so high that you're gonna saturate your rack regardless. Blade servers may be cleaner to manage, but they also come with other issues, such as: if your chassis fails, you're basically screwed. We've seen this plenty of times.

    It's generally nicer to have a single 1U server going offline, than seeing your 10U blade chassis deciding to take a shit, and bringing 16 nodes offline.

    BS! Management and maintenance are largely human-resource bound and hence a significant cost factor.

    We generally only cable things once. We can't simply look at the human-resource perspective of it. Sure it's a significant cost factor, but it's not a big enough cost factor for datacenters to decide to go all in on blades - if it was a big enough improvement, we'd see blade systems being deployed a lot more frequently than we currently do, at small and large scale.

    The benefits are generally not there. DCs and providers in general do this math.

    And btw, you are jumping between a DC and a rack perspective, which are very different wrt maintenance and management.

    I know they're different, but that doesn't really change anything. But thanks for pointing it out.. I guess?

    So, you know better than me? I don't think so.

    Yes I do know better than you. There's plenty of people facepalming about your stupidity currently.

    I think that your whole position is incoherent, jumping between rather different positions (e.g. DC vs colo'd rack)

    Doesn't change much whether it's a DC or someone who leases a rack. In any case, I'd say people going for a single rack will have an easier time with anything really - they only have to care about themselves.

    As a DC provider (obviously there are different types of these as well, and how things are managed depends on what type of DC you are), we need to think about the bigger picture for the whole DC, but obviously also for the individual racks. Racks might have very different requirements, so thinking about individual racks, even as a DC, should be perfectly fine. We deploy various setups as a DC within various racks.

    There's no "one size fits all", even for a DC; all customers or projects have different requirements. Some things can go into very generic racks, other things have to go into purpose-built racks with a specific power or network layout to support the given case.

    BS again! For one you don't get to define "high density" just the way you wish

    Well, in fact one does get to define high density as one wishes to define it. You have high-density power racks, high-density compute racks (which doesn't always mean high-power racks), high-density network capacity racks, high-density storage racks, high density .*.

    The requirements for each type of rack will obviously differ - but they can all be high density.

    plus it makes a significant difference whether one cools 10 or 14 systems with a few e.g. 120 mm fans or with a lot of tiny 1 HU fans.

    Yes, it's correct that blades generally speaking will create nice hotspots in a datacenter. They're therefore often too dense. That's why one would have both power and cooling capacities for racks, and you're able to reach those limits.

    In the end it boils down to meeting the requirements, and pushing out the hot air. Again, we're maybe at 10kW limits, you'll reach your power and cooling capacity pretty quickly. So whether you have blades or 1U servers won't really matter much. You have a set limit in a rack.

    Plus, if space is of concern one can fit more performance into 2 socket systems than into 1 socket systems.

    Who would have thought. slow clap

    So a TLDR:

    • Blades don't really make sense (unless you like a rack that is a bit cleaner)
    • You can reach 3kW, 6kW or 10kW limits with either blades or 1U servers (and possibly even 2U servers, but anyway!)
    • 1U servers are perfectly fine to deploy, you get a nice density, 2U systems would probably actually be better.
    • You can get a lot of cores in a rack
    • Dell, HP, Supermicro, Lenovo and Gigabyte (and others) are all good brands, and all offer 2P servers with 7002 series EPYC. And you can build racks that are more dense (compute and power wise) than the physical space allows for.

    Obviously you're probably gonna write a long text to continue your argument, because you believe that blades are still superior to anything. I guess you're not really spending much time in a datacenter, you'd know better otherwise.

    But whatever floats your boat, mate :) You can use blades if you believe they're superior - the rest of us can continue with 1U and 2U systems, 1P or 2P depending on what we need, and we'll still be able to have a high density of compute power.

    Have a beautiful evening.

  • jsg Member, Resident Benchmarker

    @Zerpy said:
    Obviously you're probably gonna write a long text to continue your argument, because you believe that blades are still superior to anything. I guess you're not really spending much time in a datacenter, you'd know better otherwise.

    Nope, I won't, because you've demonstrated again and again that you don't really know and understand the topic (e.g. DC vs. colo'ing racks), nor how to argue properly (hint: arrogance and belittling don't really convince).

    In fact, you didn't even get the basics. My point was not "use blades! blades are superior!". My point was that there are no dual-socket Epyc blade systems at all and only a few single-socket Epyc blades, from a single manufacturer.
    My other point was that blade systems do have advantages in quite a few use cases - but you repeatedly simply ignored facts like e.g. simpler management.

    But I get it, you are not only more experienced and smarter than me, but Cisco, Dell, Fujitsu, etc. also know less than you, and so do their customers who are "stupidly" buying blade servers.

  • @jsg said:
    But I get it, you are not only more experienced and smarter than me, but Cisco, Dell, Fujitsu, etc. also know less than you, and so do their customers who are "stupidly" buying blade servers.

    They have their use-cases; maybe the lack of dual-socket blades shows a bit that it's not a use-case ;)

    Thanked by 1TimboJones
  • jsg Member, Resident Benchmarker
    edited November 2020

    @Zerpy said:

    @jsg said:
    But I get it, you are not only more experienced and smarter than me, but Cisco, Dell, Fujitsu, etc. also know less than you, and so do their customers who are "stupidly" buying blade servers.

    They have their use-cases; maybe the lack of dual-socket blades shows a bit that it's not a use-case ;)

    FYI, I just had a quick look at Lenovo, Dell, and Cisco. They all have dual-socket Xeon blades. And Supermicro (with 3 single-socket Epyc blades) offers 8 dual-socket Xeon blades.

    That shows a bit that dual-socket blades are a use-case ;)

    Side note: I presume that HP also has dual-socket Xeon blades, but I didn't verify it because their web site is so acutely marketing-bloated and deranged that I refuse to have my brain fried by looking at that abomination.

  • Not much to FYI about - hardware suppliers build stuff based on customer demand, and if there's a lack of such solutions, the demand has not been there.

    Customers who use EPYC CPUs and are deploying 2P are not really asking for blades - if that were the case, Dell and others are perfectly capable of building such systems.

    It's really simple.

  • FHR Member, Host Rep

    BLADES

    The reason people use blades is that they're "easier to work with" and vendors like to push them for that reason (explanation below; 1). You've got a central chassis with a shared backplane, integrated network/FC/IB switch, central cooling, central power, central management module (IPMI)...
    "Wow it's so great, we don't need to buy a separate switch, it's all neatly integrated!!!!!!!"

    It's great, until you realize that if the chassis backplane itself dies (which does happen),
    you just lost 10+ servers and the network for them. Good luck troubleshooting and getting it back up and running (replacing with a spare 80kg chassis - minimum 2 people to handle that - and moving all blades and modules into the new chassis, recabling, reracking...).
    They also usually cost a lot more money than individual servers - which is why hosting providers don't typically use them. They are pretty popular in old-fashioned enterprise setups though (nowadays sometimes being replaced by HYPERCONVERGED systems - which are basically never blades).

    The thing is, with a blade chassis full of dual socket servers, you will typically have only one or two such chassis in a rack due to power and cooling budgets anyway.

    The situation is a bit different with low powered servers (e.g. E3, Atom, Xeon D), where providers like to leverage products like Supermicro Microcloud - although these are not your typical "blade servers", since the only thing shared is the chassis, power and cooling. No data backplane for network switching, passthru's etc... The focus is low cost and high efficiency.


    @jsg said: But I get it, you are not only more experienced and smarter than me, but Cisco, Dell, Fujitsu, etc. also know less than you, and so do their customers who are "stupidly" buying blade servers

    Just because there's a market for a product does not mean it's a great product. I would say traditional blades are mostly legacy equipment.
    (1) The reason vendors make these and push them is that they provide immense vendor lock-in. You just spent 20 grand on a chassis; will you buy completely new servers from a potentially different vendor, or just buy new blades for the existing chassis? Sounds like a juicy support contract as well. What if you want to upgrade that RAID controller or a NIC? $$$$$, or not even possible.

    @jsg said: Side note: I presume that HP also has dual-socket Xeon blades

    BL460c


    @jsg said: embedded systems.

    Nobody cares about embedded, apart from budget hosting providers selling Atoms and Xeon Ds and Supermicro making them. Dual socket embedded systems are not a thing anyway.


    @jsg said: Large scale hosting sooner or later boils down to either custom hardware or blades.

    No. As mentioned above, blades are basically never used, if for nothing else then because of the poor price/performance ratio.


    @jsg said: Assuming a 10 kW rack and 1HU Epyc servers even with just 500W per server the density will be very poor.

    What. That's literally the definition of high density servers. High compute per small volume - and with these systems, you will face the unfortunate reality of having to interleave them with blanking plates, because you can't exceed the DC's designed power/cooling density per rack.

    Some DCs oriented towards HPC can handle 25kW+ racks. In that case, yes, you can fill a cabinet with blades and you get "higher density". Overall, it will just cost you more money than filling a rack and a half with 1U servers for the same performance.
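
    A rough Python sketch of that crossover, reusing the thread's assumed ~500 W per dual-socket node and the 16-nodes-per-10U chassis example from earlier (illustrative numbers only):

      def nodes_per_rack(rack_power_w, rack_units=42, node_power_w=500,
                         nodes_per_chunk=1, units_per_chunk=1):
          # nodes that fit before either the power or the space budget runs out
          power_limited = rack_power_w // node_power_w
          space_limited = (rack_units // units_per_chunk) * nodes_per_chunk
          return min(power_limited, space_limited)

      for rack_kw in (10, 25):
          as_1u     = nodes_per_rack(rack_kw * 1000)                            # 1 node per U
          as_blades = nodes_per_rack(rack_kw * 1000, nodes_per_chunk=16,
                                     units_per_chunk=10)                        # 16-node 10U chassis
          print(f"{rack_kw} kW rack: {as_1u} nodes as 1U servers vs {as_blades} nodes as blades")

    Only once the rack's power budget exceeds what a full rack of 1U boxes can draw does the blade chassis pull ahead, which is exactly the "rack and a half of 1U servers" trade-off above.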

  • jsg Member, Resident Benchmarker

    @FHR

    Yes, most hosters, and particularly low-end hosters, strongly prefer standard-format servers over blades. But then their approach is usually based on one factor only: price, the lower the better. That's IMO the reason why there are so many (2nd- or 3rd-hand) HP servers in hosters' racks.
    Also, the advantages of blades wrt management are less important to them, as they need to have some tech staff anyway, and some kind of scripting is a core part of their business.

    "backplanes can break" - yes, sure, but so can and so processors, Raid controllers, and many other items and btw, most newer blade systems have almost purely passive BPs and/or redundancy. And I have heard stories about failed backplanes very rarely from colleagues in DCs.

    @FHR said:
    The reason people use blades is that they're "easier to work with" and vendors like to push them for that reason (explanation below; 1). You've got a central chassis with a shared backplane, integrated network/FC/IB switch, central cooling, central power, central management module (IPMI)...
    "Wow it's so great, we don't need to buy a separate switch, it's all neatly integrated!!!!!!!"

    Certainly true for some, but not what I'm interested in. But since you mentioned it: with a blade system you can have redundant central management and switches. That is something I'm really interested in. My main grievance always was "meh, only two drives per blade" (with many/most blades), or (IMO weird) storage blades that couldn't be shared, or only via the network. But gladly one can nowadays get blades with 4, 6 (and possibly more) 2.5" drive slots, e.g. from Dell.

    But again, be that as it may, my point wasn't "hosters should use blade systems!" anyway. My point was that there are still way, way fewer AMD-based (especially server) systems of any kind available than Intel systems, both new and second-hand. Plus, unfortunately (I like AMD processors), there seems to be quite a gap in time between some YouTubers talking about "THE next big thing!!!" (which most of them do about monthly, for a living it seems) and real-world availability for you and me and Joe.
