
    SSD RAID10 expected I/O performance

    emilv Member

    Hello, what performance would you expect (a dd test, for example) from running 4x 512GB Samsung 850 Pros in RAID10 on a Dell H700/H800 RAID controller with 512MB cache?

    Is anyone running a similar setup who can share their dd test?
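    For reference, the kind of dd sequential-write test being compared in this thread writes a 1GiB file and forces a flush before dd reports its timing; the target path below is only a placeholder:

      # 16k blocks of 64KiB = 1GiB; conv=fdatasync flushes the data before dd prints the rate
      dd if=/dev/zero of=/root/ddtest bs=64k count=16k conv=fdatasync
      rm -f /root/ddtest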

    Comments

    • dediserve Member, Provider

      On an HP array we see 1.4GB/s on a standard dd.

      Thanked by emilv
    • Currently I'm getting around 450MB/s and I can't figure out why it's so low.

    • dediserve Member, Provider

      That is low, certainly for those drives too (we've tested those very same drives and deployed some for clients).

    • Is write caching disabled on the array controller because of a BBU issue or similar?

    • SSDBlaze Member, Provider

      We see writes at about 580MB/s with 512GB 850 Pros in RAID10.

      Same controller; I was also expecting something higher when I deployed the machine.

    • I've tried it both enabled and disabled. With it disabled, the performance is a little worse.
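    Assuming the H700 responds to the standard LSI MegaCli tooling (Dell PERCs are LSI-based), one way to confirm what the write cache is actually doing, and whether a battery problem has quietly forced it into write-through:

      # current cache policy per logical drive (WriteBack vs WriteThrough)
      MegaCli64 -LDGetProp -Cache -LAll -aAll
      # battery status; many controllers drop to write-through when the BBU is bad
      MegaCli64 -AdpBbuCmd -GetBbuStatus -aAll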

    • emilv Member
      edited August 2015

      @SSDBlaze Do you know what setting you have in the Dell BIOS regarding Power Management?

      Static MAX Performance: DBPM Disabled (BIOS will set P-State to MAX); Memory frequency = Maximum Performance; Fan algorithm = Performance

      OS Control: Enable OS DBPM Control (BIOS will expose all possible P-states to the OS); Memory frequency = Maximum Performance; Fan algorithm = Power

      Active Power Controller: Enable Dell System DBPM (BIOS will not make all P-states available to the OS); Memory frequency = Maximum Performance; Fan algorithm = Power

      Custom: CPU Power and Performance Management: Maximum Performance | Minimum Power | OS DBPM | System DBPM; Memory Power and Performance Management: Maximum Performance | 1333MHz | 1067MHz | 800MHz | Minimum Power; Fan Algorithm: Performance | Power

      I saw on one forum that changing to the Maximum Performance setting increased performance.
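    A rough way to see from inside the OS which power profile is actually in effect, assuming the cpufreq driver is loaded: on the "OS Control" profile CentOS 6 should expose the cpufreq sysfs tree, while on "Static Max Performance" it may not appear at all since the BIOS hides the P-states:

      # scaling governor, plus current vs. maximum frequency for CPU 0
      cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
      cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
      cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq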

    • SSDBlaze Member, Provider

      I'll have to check that out asap and get back to you

      I didn't change any of those settings after deploying it. Maybe we can both improve the speeds by checking that out :) @emilv

    • @emilv @dediserve I'm running 4x 512GB Samsung 850 Pros in RAID10 via an HP P420; with a dd test I get 1GB/s.

      Thanked by dediserve


    • dediserve Member, Provider

      We use the Smart Array P420i controller with 2 GB flash-backed write cache

    • @dediserve Yes, I run that same controller, but sometimes the dd test only gets 450-550MB/s. I don't know why.


    • I would expect around 1GB/s because of the cache; you should get close to those numbers even without any cache at all.


    • William Member, Provider

      Essentially 1GB/s for both read and write: in RAID10 a write is striped across the 2 mirrored pairs, so 2 SSDs at a maximum of ~550MB/s each (SATA3 minus overhead) = ~1100MB/s for writes.

    • emilv Member
      edited August 2015

      If that power setting doesn't do anything, I'm beginning to wonder if the controller is to blame. A while ago these controllers accepted only Dell-branded SSDs, but that was changed, so maybe the firmware isn't compatible with, or optimized for, consumer-grade SSDs.

    • AnthonySmith Top Provider

      If possible, try putting all 4 in RAID 0 and test again; that should help rule out (or confirm) controller bottlenecks.

      Also (not that it should make a huge difference), what filesystem are you using and which kernel?

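    For anyone comparing results, both answers can be pulled in one go (run from the directory the dd test is done in):

      uname -r     # running kernel
      df -T .      # filesystem type of the mount being tested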

    • CentOS 6 x64, ext4

    • What dd test are you running? Different block sizes and counts will give you different results.

      Not sure if I've missed it, but nobody has posted the command being used so we can compare.

    • dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; unlink test (and other block sizes as well).
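    A variant worth running alongside that for comparison: oflag=direct bypasses the Linux page cache entirely, so every 1MiB write is issued straight to the block device rather than buffered first, and the reported number is usually lower and closer to the array's sustained rate:

      dd if=/dev/zero of=test bs=1M count=1k oflag=direct; rm -f test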

    • As a side note, what stripe size do you guys use for your RAID10 configs?

    • jazz1611 Member
      edited August 2015
      Stripe size: 512KB
      Full stripe size: 1024KB
      
      Thanked by emilv

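    On the Dell side of this comparison, the configured strip size can usually be read from the logical-drive info, again assuming the H700 answers to the LSI MegaCli tooling like other PERCs do:

      # "Strip Size" is the per-disk chunk; full stripe = strip size x number of data disks
      MegaCli64 -LDInfo -LAll -aAll | grep -i strip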

    • jmginer Member, Provider
      edited August 2015

      LSI RAID-10 + BBU (Write back mode)

      4 x 850 Pro

      # cd /vz; dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; rm -rf test;
      16384+0 records in
      16384+0 records out
      1073741824 bytes (1.1 GB) copied, 1.09556 s, 980 MB/s
      
    • MarkTurner Member
      edited August 2015

      @emilv - what storage enclosure is this installed in? Is it a contended or uncontended backplane? Are all the disks connecting as SATA3 devices or SATA2?

    • It's a Dell R410.

    • @emilv - Doesn't mean anything to me, I don't use Dell kit. You need to determine whether the backplane is contended and also whether the disks are even connecting as SATA3.

    • Yeah, thanks for the info.

    • All disks report

      Device Speed: 6.0Gb/s
      Link Speed: 6.0Gb/s
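    The same thing can be checked from the OS with smartmontools, assuming the controller passes SMART through; for MegaRAID-based cards like the H700 the -d megaraid,N syntax addresses an individual drive (N and /dev/sda below are placeholders):

      # prints a line like "SATA Version is: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)"
      smartctl -i -d megaraid,0 /dev/sda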

    • You still need to see whether the backplane is contended. I assume you have an SFF-8087 (mini-SAS) connector on your card and on your backplane? Are all 4 channels being used? Or is there some 'expander' chip on the board which is aggregating the 4 disks onto 1-2 channels?

      Can you set the RAID controller into HBA mode and then test the raw disk performance?

      What about RAID0 with a single disk, what is performance like then?

      Do you have a battery on the controller cache?
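    A rough way to sanity-check a single member disk once the controller is in HBA/passthrough mode (or with a one-disk RAID0): read the raw device with the page cache bypassed. /dev/sdb here is just a placeholder for whichever disk gets exposed:

      # sequential read of 4GiB straight off one disk, no filesystem involved
      dd if=/dev/sdb of=/dev/null bs=1M count=4k iflag=direct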

    • Since there are only 4 disks, I think it is safe to assume each has its own channel.


    • Maounique said: Since there are only 4 disks, I think it is safe to assume each has its own channel.

      Don't be sure of that at all! I have seen backplanes that were engineered for 8 disks, with 4 disks sharing 2 channels. Then the 4-disk backplane was just that board cut in half, so still 4 disks over 2 channels.

      That was a Quanta heap of crap, and Quanta builds stuff for lots of different vendors.

    • Interesting, will have to check that. Thanks!
