Anyone used 4 x SSDs with SW RAID10?

If so, what were the ioping and dd results?
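
For reference, the usual invocations I have in mind (a sketch; the test file name, counts, and target directory are just illustrative):

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; unlink test
    ioping -c 10 .
    ioping -RD .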

Comments

  • We have, in a few different builds over the past few months. Will post the results and SSD models soon for benchmark comparison.

  • @volumedrive said:
    We have, in a few different builds over the past few months. Will post the results and SSD models soon for benchmark comparison.

    I love you.

    Thanked by: volumedrive, jar
  • jar Patron Provider, Top Host, Veteran

    Surely. I wouldn't be surprised if @Nick_A had at least tested some. I know he loves to toy with different configurations.

  • It's useless unless you're talking about Haswell, since otherwise there aren't enough SATA3 ports. That was the only reason to get a RAID card.

  • Xeon E3 V3 boards should have enough SATA3 ports?

  • @rds100 said:
    Xeon E3 V3 boards should have enough SATA3 ports?

    I did say unless you are talking about Haswell?

    Thanked by: rds100
  • @concerto49 ok, correct :)

  • Nick_A Member, Top Host, Host Rep

    Is this a marketing thread?

  • @Nick_A said:
    Is this a marketing thread?

    get out of here

  • @serverian said:
    get out of here

    Possible to get a larger disk for the VPSDime 6GB RAM plan?

  • @kyaky said:
    Possible to get a larger disk for the VPSDime 6GB RAM plan?

    It became a marketing thread :)

  • @concerto49 said:
    It became a marketing thread :)

    Forgot to reply to your msg... I will think about which plan I want first. Thanks for the GST help.

  • Nick_A Member, Top Host, Host Rep

    @serverian said:
    get out of here

    smh

  • serverian Member
    edited November 2013

    4 x Intel 530 240GB SWRAID10 Results:

    [root@host ~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; unlink test
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 2.50187 s, 429 MB/s

    [root@host ~]# dd if=/dev/zero of=test bs=1024k count=16k conv=fdatasync; unlink test
    16384+0 records in
    16384+0 records out
    17179869184 bytes (17 GB) copied, 33.6 s, 511 MB/s

    [root@host ioping-0.6]# ./ioping -c10 .
    4096 bytes from . (ext4 /dev/md2): request=1 time=2.1 ms
    4096 bytes from . (ext4 /dev/md2): request=2 time=6.0 ms
    4096 bytes from . (ext4 /dev/md2): request=3 time=6.0 ms
    4096 bytes from . (ext4 /dev/md2): request=4 time=166.8 ms
    4096 bytes from . (ext4 /dev/md2): request=5 time=216.1 ms
    4096 bytes from . (ext4 /dev/md2): request=6 time=6.6 ms
    4096 bytes from . (ext4 /dev/md2): request=7 time=2.2 ms
    4096 bytes from . (ext4 /dev/md2): request=8 time=6.0 ms
    4096 bytes from . (ext4 /dev/md2): request=9 time=2.3 ms
    4096 bytes from . (ext4 /dev/md2): request=10 time=1.4 ms

    --- . (ext4 /dev/md2) ioping statistics ---
    10 requests completed in 9417.0 ms, 24 iops, 0.1 mb/s
    min/avg/max/mdev = 1.4/41.6/216.1/75.8 ms
    [root@host ioping-0.6]# ./ioping -RD .

    --- . (ext4 /dev/md2) ioping statistics ---
    19309 requests completed in 3000.1 ms, 9705 iops, 37.9 mb/s
    min/avg/max/mdev = 0.0/0.1/4.0/0.1 ms
    [root@host ioping-0.6]# ./ioping -R .

    --- . (ext4 /dev/md2) ioping statistics ---
    18800 requests completed in 3000.0 ms, 9400 iops, 36.7 mb/s
    min/avg/max/mdev = 0.0/0.1/49.7/0.4 ms

  • @serverian this doesn't look good. Is the RAID resyncing at the moment or something?
    Also, what are the results for RAID1 with the same drives?
    And have you tweaked /sys/class/block/sd?/queue/?
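
    For example, something like this (a rough sketch; the device name and values are illustrative, typical SSD tweaks rather than a recommendation):

    # use the noop elevator; SSDs gain little from seek-order scheduling
    echo noop > /sys/class/block/sdb/queue/scheduler
    # mark the device as non-rotational
    echo 0 > /sys/class/block/sdb/queue/rotational
    # allow more outstanding requests in the queue
    echo 1024 > /sys/class/block/sdb/queue/nr_requests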

  • Haswell's C226-based boards with 6x SATA3 choke at around 800M-1G based on RAID0 testing. You won't get >1G write across all drives, so RAID10 would land around what you see now, which is 500M or so.

    The only way to get close to 1G on RAID10 is to run 2 drives onboard and 2 drives on a separate controller. LSI 9220s are cheap, support SATA3, and can do 1.6G across 4x SSDs in SoftRAID0 (sketch below).
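
    A minimal sketch of that split with mdadm (device names are illustrative; I'm assuming sda/sdb sit on the onboard ports, sdc/sdd on the LSI, and that md's default near layout mirrors adjacent devices in the list):

    # order the devices so each mirror pair spans both controllers
    mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        /dev/sda /dev/sdc /dev/sdb /dev/sdd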

  • @Kenshin said:
    Haswell's C226-based boards with 6x SATA3 choke at around 800M-1G based on RAID0 testing. You won't get >1G write across all drives, so RAID10 would land around what you see now, which is 500M or so.

    The only way to get close to 1G on RAID10 is to run 2 drives onboard and 2 drives on a separate controller. LSI 9220s are cheap, support SATA3, and can do 1.6G across 4x SSDs in SoftRAID0.

    Do you mean using the LSI 9220 as an HBA rather than as a RAID card, and doing the SW RAID with the drives connected to it?

  • AnthonySmith Member, Patron Provider
    edited November 2013

    @serverian said:
    4 x Intel 530 240GB SWRAID10 Results:

    What kernel is that running on? I get more than that with 2 x Samsung 830s in mdadm RAID1 on 3.4.x kernels. Post your cat /proc/mdstat please; I suspect you have bitmapping enabled to get such poor ioping results.
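
    For reference, checking for and clearing the write-intent bitmap would look something like this (using /dev/md2 from the results above):

    cat /proc/mdstat
    # if the array shows a 'bitmap:' line, dropping it often helps small-write latency
    mdadm --grow --bitmap=none /dev/md2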

  • leapswitch Patron Provider, Veteran

    We are using 4x250GB Samsung EVO in SW RAID10 for all new Shared Hosting setups, and 4x160GB Intel 320 in SW RAID10 for all earlier setups. Performance, reliability, and account capacity are significantly better than on our previous SW RAID10 SATA setups.

  • I hope there's either a resync/rebuild in progress or you have the bitmap cache enabled, as I would expect far better than that. I have servers with 4x 1TB SATA SW RAID10 pulling over 300MB/s.

  • Kenshin Member
    edited November 2013

    serverian said: Do you mean using the LSI 9220 as an HBA rather than as a RAID card, and doing the SW RAID with the drives connected to it?

    Yes. In my earlier tests I managed only 1.3G with HWRAID0 on the LSI 9220, likely due to the lack of cache on the card itself.

    LSI 9220 (4) HWRAID0 = 1.3G Write

    LSI 9220 (4) MDRAID0 = 1.5G Write

    LSI 9220 (2) + Intel C204 SATA3 (2) MDRAID = 1.6G Write, 2.0G Read

    Intel C226 SATA3 (4) MDRAID = 800M-1G Write, 1G Read

    My conclusions were:

    1) MDRAID performed better than HWRAID on the cacheless LSI 9220

    2) Intel C226 with 6 SATA3 ports can only achieve a max throughput of 1G. I assume C224 with 4 SATA3 ports should be similar. Looking at Intel's block diagram for the C22x chipsets, I suspect the total bandwidth is only 6Gb/sec shared across all the ports rather than sufficient bandwidth per-port, but I didn't do enough testing to confirm this.
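
    A quick way to probe that aggregate limit (a rough sketch; device names are illustrative) is to read every drive raw in parallel with direct I/O and add up the reported rates:

    # read all four SSDs at once, bypassing the page cache
    for d in sda sdb sdc sdd; do
        dd if=/dev/$d of=/dev/null bs=1M count=4096 iflag=direct &
    done
    wait
    # if the summed throughput plateaus near 1G regardless of how many drives
    # participate, the bottleneck is the chipset rather than the drives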

  • @Ash_Hawkridge: So you have started another business? Will you sell it again in the future?

  • Kenshin said: Looking at Intel's block diagram for the C22x chipsets, I suspect the total bandwidth is only 6Gb/sec shared across all the ports rather than sufficient bandwidth per-port, but I didn't do enough testing to confirm this.

    It would max out at 600-750 GB/sec then, not at 800-1000.

  • The limit is the 4GB/s DMI link from the C226 chipset to the CPU on Lynx Point. It is bi-directional.

    Also, RAID0 on C226 tops out at about 1.2GB/s. So that's the limitation.

  • rds100 said: It would max out at 600-750 GB/sec then, not at 800-1000.

    Yeah, but the numbers are pretty close around there.

    concerto49 said: Also, RAID0 on C226 tops out at about 1.2GB/s. So that's the limitation.

    Guess that's the limitation of the chipset. 6x SATA3 ports would pull 200M/port on average; might as well stick to SATA2, but I've already standardized my purchases on the X10SLH, so oh well.

  • @Kenshin said:
    Guess that's the limitation of the chipset. 6x SATA3 ports would pull 200M/port on average; might as well stick to SATA2, but I've already standardized my purchases on the X10SLH, so oh well.

    You can use 4 drives, making 300M/port :p

  • qps Member, Host Rep

    Maybe try with the X10SL7-F (LSI2308 integrated)?

  • DewlanceVPS Member, Patron Provider

    I use this. The dd result without virtualization (no VM) is 1GB/s; with Xen virtualization the speed sometimes shows 1GB/s, or 500MB/s to 700MB/s.

  • rds100 said: It would max out at 600-750 GB/sec then, not at 800-1000.

    @rds100 GB/sec :o

    BTW my Seagate HDD gave me this:

    Avg. Read and Write Rate: 150 MB/s

  • @MikeIn ok, make that MB/sec :)
