PCI-E SSD experiences?

deployvm Member, Host Rep
edited May 2016 in General

I would be interested to hear if anyone has used a PCI-E SSD before, especially for virtualisation purposes.

I'm more concerned with reliability and long-term durability.
We all know about the huge performance advantage of PCI-E SSDs.

How well does it work in a production environment? Under normal usage, is a single SSD card enough to last several years on average, comparable to SATA SSDs?

It may even seem a little awkward to configure software RAID on it and set up LVM volume groups.

Thanks!

Comments

  • zrunner Member
    edited May 2016

    I used 1x OCZ RevoDrive X2 120GB (I think it was the X2) on a desktop/workstation. I'm currently using 1x Asus RAIDR Express 240GB on a desktop/workstation, plus 1x Intel SSD 910 800GB and 1x Intel SSD 910 400GB in servers.

    The OCZ and Asus are consumer devices; the Intels are enterprise.

    I used the OCZ very briefly so I can't say much about it, and it's rather old now. I've got the Asus one in my main computer and use it for games; performance-wise it's getting outdated by just (motherboard) RAIDing 2 new SSDs in RAID 0 (the RAIDR has 2x 120GB in RAID 0).

    The Intel ones are in two servers I use for ESXi. They show up as 200GB drives, so with the 800GB you get 4x 200GB drives, and there's no BIOS setup/menu for RAIDing them.
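
    Since there's no firmware RAID, you stripe the LUNs on the host instead. Just as an illustration of what that could look like on a Linux box (ESXi handles this differently through its own datastores), here's a minimal sketch wrapped in Python; the device names /dev/sdb to /dev/sde and the VG/LV names are assumptions, and the commands wipe whatever is on those devices:

        # Minimal sketch: stripe the four 200GB LUNs of an Intel 910 into one
        # md RAID-0 device and put an LVM volume group on top.
        # Device names (/dev/sd[b-e]) and the VG/LV names are assumptions.
        # WARNING: this destroys any data on the listed devices.
        import subprocess

        LUNS = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]

        def run(cmd):
            print("+", " ".join(cmd))
            subprocess.run(cmd, check=True)

        # 1. RAID-0 across the four LUNs.
        run(["mdadm", "--create", "/dev/md0", "--level=0",
             "--raid-devices=4"] + LUNS)

        # 2. LVM on top, so guests get logical volumes carved out of the stripe.
        run(["pvcreate", "/dev/md0"])
        run(["vgcreate", "vg_ssd", "/dev/md0"])
        run(["lvcreate", "-L", "100G", "-n", "vm_disk01", "vg_ssd"])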

    The 800GB one is great (not saying the 400GB is bad). You can even boost it using Intel's software and get about a 50% write performance boost, but it puts out a lot more heat; I think it's max 25W in normal mode and with "boost" on it's around 40W.
    The 800GB has almost 1,800GB of raw space for wear leveling etc. and has over 10PB (petabytes) of write endurance.

    I've used mine for about 1.5 years, and they're both in servers I use for personal purposes, so they're not getting hammered that hard.

    I think all 3 of the listed products have reached EOL (End Of Life).
    But if you're able to get your hands on an Intel 910 SSD, I fully recommend them, if you can find them cheap, as they're getting rather old (2012) and performance-wise I'm sure other enterprise PCI-E SSDs are better.
    Do remember they are enterprise products, so they cost a lot more than a normal Samsung consumer SSD :)

  • Microlinux Member
    edited May 2016

    The difference is in the interface, not the SSD itself.

    I don't really think it makes sense to switch to PCI-E for reliability: see my first point, and then factor in that you lose the ability to hot-swap.

    In terms of performance, you won't notice any difference in production between a PCI-E and a SATA SSD unless your sequential throughput exceeds SATA speeds.

  • zrunner Member
    edited May 2016

    A lot has to do with what controller they're using, so the difference can be in both the interface and the SSD itself. Those of us with some (basic) knowledge of the different SSD controllers have a rough idea of how the drives will perform.

    If you do find that you're bottlenecked by SATA3 speeds, PCI-E will of course give you a boost.

    Just pulling up some manufacturer specs for the Samsung 850 PRO 1TB (2015, consumer) and the Intel 910 SSD 800GB (2012, enterprise):

    Sequential: the Samsung maxes out at ~520MB/s read/write because of SATA3 (the bottleneck); the Intel 910 maxes out at 2GB/s read and 1GB/s write (or 1.5GB/s in boost mode).

    IOPS: Samsung 100K/90K read/write; Intel 180K/75K read/write, so on write IOPS the Samsung beats the Intel.

    Power consumption: you can run about 10 Samsungs for 1 Intel (on boost).

    Endurance: here's a huge difference, which also comes down to consumer vs enterprise. The Samsung is rated for 150TB; the Intel for 14,336TB (14 petabytes).
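
    To put those endurance ratings into years, a quick back-of-the-envelope calculation; the 100GB/day write rate below is just an assumed workload for illustration:

        # Back-of-the-envelope: years until the rated write endurance is used
        # up at a steady average write rate. 100GB/day is an assumed example
        # workload, not a measured figure.
        def years_until_worn_out(endurance_tb, writes_gb_per_day):
            writes_tb_per_year = writes_gb_per_day / 1000 * 365
            return endurance_tb / writes_tb_per_year

        for name, endurance_tb in [("Samsung 850 PRO 1TB", 150),
                                   ("Intel 910 800GB", 14336)]:
            years = years_until_worn_out(endurance_tb, writes_gb_per_day=100)
            print(f"{name}: ~{years:.0f} years at 100GB/day")
        # Samsung 850 PRO 1TB: ~4 years at 100GB/day
        # Intel 910 800GB: ~393 years at 100GB/day

    So even at a fairly busy 100GB/day, the consumer drive's rating holds for roughly four years, and the 910's rating is effectively never the limiting factor.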

  • LiteServer Member, Patron Provider

    Both 2.5" and PCI-E SSDs should last equally long, especially when you're using TRIM. Overprovisioning is something to consider as well if you're unable to use TRIM.
    PCI-E drives reach their nice I/O numbers due to the way they are designed. With a traditional SATA SSD, the SATA-600 port is what limits the I/O performance - especially the sequential numbers.
    PCI-E SSDs usually have more chips, which are all basically set in "RAID-0" by the controller to increase performance.
    One problem with PCI-E SSDs is the lack of hotswap. But whether that's really a problem these days... modern SSDs (MLC chips) can deal with a ton of TBs (if not PBs) of writes, especially with TRIM or overprovisioning. Just keep an eye on the SMART data of the SSD (wear leveling), and you'll get an idea of how far "gone" the SSD is.
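
    For the monitoring part, something along these lines works; a minimal sketch that shells out to smartctl, and the wear attribute names it checks (Wear_Leveling_Count, Media_Wearout_Indicator, Percent_Lifetime_Remain) differ per vendor, so treat them as examples rather than a definitive list:

        # Minimal sketch: read SMART attributes via smartctl and flag wear
        # indicators that are getting low. Which attributes a drive exposes
        # differs per vendor, so WEAR_ATTRIBUTES is an example set.
        import subprocess

        WEAR_ATTRIBUTES = {"Wear_Leveling_Count", "Media_Wearout_Indicator",
                           "Percent_Lifetime_Remain"}

        def check_wear(device, warn_below=20):
            out = subprocess.run(["smartctl", "-A", device],
                                 capture_output=True, text=True, check=True).stdout
            for line in out.splitlines():
                fields = line.split()
                # Attribute rows look like: ID NAME FLAG VALUE WORST THRESH ...
                if len(fields) >= 4 and fields[1] in WEAR_ATTRIBUTES:
                    value = int(fields[3])  # normalised value, counts down from ~100
                    status = "WARN" if value <= warn_below else "ok"
                    print(f"{device} {fields[1]}: {value} [{status}]")

        check_wear("/dev/sda")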

    For normal/office use, it's pretty much impossible to wear out a modern SSD these days.
    Wearing out a modern SSD is harder than most people would think.

  • deployvm Member, Host Rep

    Thanks for your comments! :)

  • WootWoot Member, Host Rep, LIR

    To answer the first part of your question: we provisioned a few Intel 750 PCI-E SSDs a while ago for a customer doing virtualization who needed a more affordable alternative to a RAID-10 array of Samsung 850 Pro SATA SSDs while keeping similar performance. He uses a single card per server. The Intel 750 is rated for 127 TBW, whilst the 512GB+ Samsung 850 Pro is rated for 300 TBW. However, as mentioned earlier, with normal usage this limit is hardly ever reached. They've been running great so far. We keep spares on-site, however.

  • LiteServer Member, Patron Provider
    edited May 2016

    The "TBW" value is not that important. Plenty of SSD's can exceed this value without any issues, while others may fail before ever reaching this value.
    Has to do with all sorts of factors like write amplification. An SSD which is continuously being filled to 95% will have much and much more write amplification than one being filled to only 15%.
    So because of that the SSD S.M.A.R.T. status needs to be monitored - this will indicate the age/wear off the NAND cells. In most cases the S.M.A.R.T. value "202" reports the health of the SSD :-)
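
    If the drive exposes both host writes and NAND/flash writes in its SMART data (many do, though under vendor-specific attribute IDs), you can estimate the actual write amplification yourself. A toy sketch, assuming you've already read those two counters out in bytes:

        # Toy estimate of observed write amplification: NAND writes divided by
        # host writes. Where these counters live differs per drive (some expose
        # them as SMART attributes, others only via vendor tools), so the input
        # values below are made-up illustrations.
        def write_amplification(host_bytes_written, nand_bytes_written):
            return nand_bytes_written / host_bytes_written

        # A drive kept ~95% full vs one kept ~15% full (illustrative numbers):
        print(write_amplification(host_bytes_written=10e12, nand_bytes_written=35e12))  # ~3.5x
        print(write_amplification(host_bytes_written=10e12, nand_bytes_written=12e12))  # ~1.2x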
