How long does it take to build a RAID 10 array fresh?

Corey Member
edited July 2012 in General

So I'm building a RAID 10 array on an Adaptec 6405E with 4 x 300GB WD Raptors. I did a 'secure erase' from the Adaptec menu and then selected 'build & verify' on the RAID creation menu. I am only getting about 87MB/s on a dd test from within the CentOS 6 install I did... shouldn't I be getting in excess of 200MB/s? I checked the drives with the latest version of smartctl and they are all fine.

(To get the 6405E to work with CentOS I installed the drivers Adaptec makes available before the install; then, once installed, I upgraded the kernel and installed the kmod-aacraid module, because the Adaptec drivers don't stick across kernel updates. Rough commands below.)
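
Roughly the driver steps, from memory (exact package and repo names may vary - kmod-aacraid came from a kABI-tracking kmod repository such as ELRepo, so double-check what your mirror actually provides):

    # install Adaptec's driver disk during the CentOS 6 install, then once booted:
    yum install kmod-aacraid     # kABI-tracking module, survives kernel updates
    yum update kernel
    reboot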

Comments

  • ShardHost

    Less than an hour

  • Corey Member

    @ShardHost well this was over 12 hours ago... I'm pretty confused as to the cause of the slow IO. I'm using a 512k block size, but it doesn't matter what block size I use on the dd test - it's pretty much the same speed (rough sweep below).
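
    The sweep looks like this (sizes are just examples; each run writes ~2GB and conv=fdatasync flushes before reporting):

    rm -f test; dd if=/dev/zero of=test bs=64k  count=32768 conv=fdatasync
    rm -f test; dd if=/dev/zero of=test bs=512k count=4096  conv=fdatasync
    rm -f test; dd if=/dev/zero of=test bs=1M   count=2048  conv=fdatasync
    rm -f test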

  • ShardHost

    Is it all Raid 10?

  • Corey Member

    @ShardHost 4 disks in raid10 yes.

  • vdnet Member

    Have you tested the disk speeds individually?

  • Corey Member
    edited July 2012

    @vdnet I did not just now, but I have tested them before and I know they get over 100MB/s each unless they are bad. SMART reports they are good, and I've also run SMART self-tests on these drives and they pass (roughly the commands below).
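
    Along these lines (the drives sit behind the Adaptec, so the exact device node and any -d type smartctl needs may differ on your setup):

    smartctl -t short /dev/sgN    # start a short self-test on one drive
    smartctl -a /dev/sgN          # health status, attributes, and the self-test log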

  • Corey Member

    Update with dd->

    [root@s9 ~]# dd if=/dev/zero of=test count=4096 bs=1024k
    4096+0 records in
    4096+0 records out
    4294967296 bytes (4.3 GB) copied, 13.7691 s, 312 MB/s
    [root@s9 ~]# dd if=/dev/zero of=test count=4096 bs=1024k
    4096+0 records in
    4096+0 records out
    4294967296 bytes (4.3 GB) copied, 47.9431 s, 89.6 MB/s
    [root@s9 ~]# rm test
    rm: remove regular file `test'? y
    [root@s9 ~]# dd if=/dev/zero of=test count=4096 bs=1024k
    4096+0 records in
    4096+0 records out
    4294967296 bytes (4.3 GB) copied, 13.9655 s, 308 MB/s
    [root@s9 ~]#

    If the file 'test' doesn't exist I get 312MB/s.
    If the file 'test' does exist I get 89.6MB/s.
    Then, as you can see, I remove the file, run the test again, and get close to 312MB/s again.

    If I use conv=fdatasync I get about 87MB/s - I can't make sense of what is going on here when the disks should do over 100MB/s each.

    [root@s9 ~]# rm test
    rm: remove regular file `test'? y
    [root@s9 ~]# dd if=/dev/zero of=test count=4096 bs=1024k conv=fdatasync
    4096+0 records in
    4096+0 records out
    4294967296 bytes (4.3 GB) copied, 48.9699 s, 87.7 MB/s
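
    Another thing worth trying, to take the page cache out of the picture completely, is a direct-I/O run, something like:

    dd if=/dev/zero of=test bs=1M count=4096 oflag=direct    # bypasses the page cache entirely
    rm -f test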

  • Corey Member
    edited July 2012

    On all the servers I have, it's faster if you remove the test file first... but I am still unable to produce anything over 100MB/s, and it's ticking me off. Even the IOPS are terrible!

    --- /dev/sda2 (device 557.5 Gb) ioping statistics ---
    445 requests completed in 3003.6 ms, 150 iops, 0.6 mb/s
    min/avg/max/mdev = 2.0/6.7/15.7/2.1 ms

    This is what I get on a busy RAID 1 node with two of the same disks:
    --- /dev/sda2 (device 278.5 Gb) ioping statistics ---
    425 requests completed in 3004.4 ms, 143 iops, 0.6 mb/s
    min/avg/max/mdev = 1.8/7.0/18.8/2.7 ms

  • jar Patron Provider, Top Host, Veteran

    You try kicking it? It's a very technical approach but it works.

    Thanked by 1: Amitz
  • Corey Member

    No - here is the info from the controller; maybe that gives someone more to go on. (No, I do not have write cache enabled - does it need to be enabled to get the performance I am looking for? Without it do you really get less performance than a RAID 1 setup? I thought RAID 10 basically writes in a RAID 0 fashion across a stripe of mirrors, so I should be getting double the write performance of RAID 1 out of the box with no cache, right?)
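
    Quick back-of-the-envelope for what I'm expecting (assuming each Raptor does roughly 100-130MB/s sequential on its own):

    4 disks in RAID 10 = 2 mirrored pairs, striped
    streaming writes ~ 2 x one disk ~ 200-260 MB/s
    streaming reads  ~ up to 4 x one disk ~ 400-500 MB/s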

    What is this - Performance Mode : Default/Dynamic ?????

    Controller Status : Optimal
    Channel description : SAS/SATA
    Controller Model : Adaptec 6405E
    Controller Serial Number : 1A4211CC920
    Physical Slot : 128
    Temperature : 54 C/ 129 F (Normal)
    Installed memory : 128 MB
    Copyback : Disabled
    Background consistency check : Disabled
    Automatic Failover : Enabled
    Global task priority : High
    Performance Mode : Default/Dynamic
    Stayawake period : Disabled
    Spinup limit internal drives : 0
    Spinup limit external drives : 0
    Defunct disk drive count : 0
    Logical devices/Failed/Degraded : 1/0/0
    MaxCache Read, Write Balance Factor : 3,1
    NCQ status : Enabled
    Statistics data collection mode : Enabled


    Controller Version Information


    BIOS : 5.2-0 (18512)
    Firmware : 5.2-0 (18512)
    Driver : 1.1-7 (28000)
    Boot Flash : 5.2-0 (18512)


    Logical device information

    Logical device number 0
    Logical device name : somr10
    RAID level : 10
    Status of logical device : Optimal
    Size : 571382 MB
    Stripe-unit size : 256 KB
    Read-cache mode : Enabled
    Write-cache mode : Disabled (write-through)
    Write-cache setting : Disabled (write-through)
    Partitioned : Yes
    Protected by Hot-Spare : No
    Bootable : Yes
    Failed stripes : No
    Power settings : Disabled


    Logical device segment information


    Group 0, Segment 0 : Present (Controller:1,Connector:0,Device:0) WD-WXD1E71MJJE7
    Group 0, Segment 1 : Present (Controller:1,Connector:0,Device:1) WD-WXL408048838
    Group 1, Segment 0 : Present (Controller:1,Connector:0,Device:2) WD-WXF1E81PXPE2
    Group 1, Segment 1 : Present (Controller:1,Connector:0,Device:3) WD-WXC0CA9K7262


    Physical Device information

      Device #0
         Device is a Hard drive
         State                              : Online
         Supported                          : Yes
         Transfer Speed                     : SATA 6.0 Gb/s
         Reported Channel,Device(T:L)       : 0,0(0:0)
         Reported Location                  : Connector 0, Device 0
         Vendor                             : WDC
         Model                              : WD3000HLHX-01JJP
         Firmware                           : 04.05G04
         Serial number                      : WD-WXD1E71MJJE7
         Size                               : 286168 MB
         Write Cache                        : Enabled (write-back)
         FRU                                : None
         S.M.A.R.T.                         : No
         S.M.A.R.T. warnings                : 0
         Power State                        : Full rpm
         Supported Power States             : Full rpm,Powered off,Reduced rpm
         NCQ status                         : Enabled
      Device #1
         Device is a Hard drive
         State                              : Online
         Supported                          : Yes
         Transfer Speed                     : SATA 3.0 Gb/s
         Reported Channel,Device(T:L)       : 0,1(1:0)
         Reported Location                  : Connector 0, Device 1
         Vendor                             : WDC
         Model                              : WD3000GLFS-01F8U
         Firmware                           : 03.03V01
         Serial number                      : WD-WXL408048838
         Size                               : 286168 MB
         Write Cache                        : Enabled (write-back)
         FRU                                : None
         S.M.A.R.T.                         : No
         S.M.A.R.T. warnings                : 0
         Power State                        : Full rpm
         Supported Power States             : Full rpm,Powered off
         NCQ status                         : Enabled
      Device #2
         Device is a Hard drive
         State                              : Online
         Supported                          : Yes
         Transfer Speed                     : SATA 6.0 Gb/s
         Reported Channel,Device(T:L)       : 0,2(2:0)
         Reported Location                  : Connector 0, Device 2
         Vendor                             : WDC
         Model                              : WD3000HLHX-01JJP
         Firmware                           : 04.05G04
         Serial number                      : WD-WXF1E81PXPE2
         Size                               : 286168 MB
         Write Cache                        : Enabled (write-back)
         FRU                                : None
         S.M.A.R.T.                         : No
         S.M.A.R.T. warnings                : 0
         Power State                        : Full rpm
         Supported Power States             : Full rpm,Powered off,Reduced rpm
         NCQ status                         : Enabled
      Device #3
         Device is a Hard drive
         State                              : Online
         Supported                          : Yes
         Transfer Speed                     : SATA 3.0 Gb/s
         Reported Channel,Device(T:L)       : 0,3(3:0)
         Reported Location                  : Connector 0, Device 3
         Vendor                             : WDC
         Model                              : WD3000HLFS-01G6U
         Firmware                           : 04.04V06
         Serial number                      : WD-WXC0CA9K7262
         Size                               : 286168 MB
         Write Cache                        : Enabled (write-back)
         FRU                                : None
         S.M.A.R.T.                         : No
         S.M.A.R.T. warnings                : 0
         Power State                        : Full rpm
         Supported Power States             : Full rpm,Powered off
         NCQ status                         : Enabled
    
  • gbshouse Member, Host Rep
    edited July 2012

    @Corey - can you try building a RAID 1 first using disks #0 and #2, and also enable the write cache? The strange thing is that you are building the RAID using 2 different types of HDDs (WD3000HLHX and WD3000HLFS).
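
    If it helps, the write cache can be toggled from the OS with arcconf, something along these lines (check arcconf's help for the exact syntax on your version, and remember write-back without battery/flash protection risks data loss on power failure):

    arcconf getconfig 1 ld                   # current logical-drive cache settings
    arcconf setcache 1 logicaldrive 0 wb     # write-back
    arcconf setcache 1 logicaldrive 0 wt     # back to write-through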

  • PAD Member

    Just so you know, the ioping results you listed aren't labelled, so we don't know what type of tests they are.

    For example..


    starting ioping tests...


    ioping disk I/O test (default 1MB working set)


    disk I/O: /dev/sda
    --- /dev/sda (device 3726.0 Gb) ioping statistics ---
    5 requests completed in 4077.4 ms, 65 iops, 0.3 mb/s
    min/avg/max/mdev = 12.6/15.3/21.3/3.1 ms


    seek rate test (default 1MB working set)


    seek rate: /dev/sda
    --- /dev/sda (device 3726.0 Gb) ioping statistics ---
    215 requests completed in 3013.7 ms, 72 iops, 0.3 mb/s
    min/avg/max/mdev = 4.6/13.9/26.0/3.7 ms


    sequential test (default 1MB working set)


    sequential: /dev/sda
    --- /dev/sda (device 3726.0 Gb) ioping statistics ---
    3643 requests completed in 3000.0 ms, 1316 iops, 329.1 mb/s

    min/avg/max/mdev = 0.4/0.8/15.9/0.5 ms

    sequential cached I/O: /dev/sda
    --- /dev/sda (device 3726.0 Gb) ioping statistics ---
    7155 requests completed in 3001.3 ms, 2805 iops, 701.3 mb/s
    min/avg/max/mdev = 0.1/0.4/25.5/0.9 ms
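
    For completeness, those sections map onto ioping runs roughly along these lines (exact flags depend on the ioping version):

    ioping -c 10 /dev/sda      # disk I/O latency
    ioping -R /dev/sda         # seek rate (random iops)
    ioping -RL /dev/sda        # sequential
    ioping -RL -C /dev/sda     # sequential, cached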

  • Corey Member
    edited July 2012

    @gbshouse I tested just the HLFS and it was getting over 90MB/s by itself. I've also tested other HLHX drives and they get over 100MB/s by themselves... so that only leaves the GLFS, which from my research is on par with the HLFS and was released after it. These drives are all very similar, and it isn't odd at all to use different model disks in an array.

    @PAD
    It was the 'Measure disk seek rate (iops, avg)' test - so it shouldn't be much higher on the RAID 10 volume; it would only show a gain if I ran the 'Measure disk sequential speed' test, right?

    Disk Sequential tests

    RAID 1:
    --- /dev/sda2 (device 278.5 Gb) ioping statistics ---
    1292 requests completed in 3001.9 ms, 442 iops, 110.4 mb/s
    min/avg/max/mdev = 0.5/2.3/87.5/3.0 ms
    RAID 10:
    --- /dev/sda2 (device 557.5 Gb) ioping statistics ---
    1682 requests completed in 3001.4 ms, 582 iops, 145.6 mb/s
    min/avg/max/mdev = 1.6/1.7/10.4/0.4 ms

    Still not impressive...
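
    If anyone wants a cleaner comparison, a direct-I/O fio run along these lines would take dd and the page cache out of it (parameters are only an example):

    fio --name=seqwrite --filename=fiotest --rw=write --bs=1M --size=4g \
        --direct=1 --ioengine=libaio --iodepth=4 --numjobs=1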

  • gbshouse Member, Host Rep

    @Corey - Since the WD3000HLHX are twice as fast as the WD3000HLFS, I would try them first: create a simple RAID 1 with write-cache enabled and check the results.

  • Corey Member

    @gbshouse they are not twice as fast - they support an interface that is twice as fast. These drives can't push over 300MB/s on their own. Why do you suggest enabling the write-cache?

  • gbshouse Member, Host Rep

    @Corey - "network administrators know that enabling the RAID controller cache offers significant performance benefits, such as reduced latency in I/O requests, bandwidth and queue depths that surpass software application limits, and on-the-fly parity calculations on sequential writes."

  • Corey Member

    @gbshouse I do know this, but I'm expecting double the performance WITHOUT write cache - shouldn't I be getting it?

  • gbshouse Member, Host Rep

    @Corey - in my view you should get better results WITH the cache enabled.

  • Corey Member
    edited July 2012

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 6.5846 s, 163 MB/s

    This is with write cache on... why do you get the expected performance with write cache on, but not when it is off? (Actually I would expect over 1GB/s with the cache, but I'm guessing fdatasync prevents that.) Does software RAID use some sort of write cache to help saturate the disk interface as well? And what about cards that have no write cache - how would they deliver the same performance?

    Also - it seems like turning the disk cache on or off makes no difference, and the disk's own cache can't be protected by a BBU - am I correct?
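
    (On a plain HBA the drive's own write cache could be checked with hdparm - just for illustration, since these drives sit behind the Adaptec and may not answer to it:

    hdparm -W  /dev/sdX    # show the drive write-cache state
    hdparm -W0 /dev/sdX    # turn it off
    hdparm -W1 /dev/sdX    # turn it on

    /dev/sdX is a placeholder for whichever device node the drive shows up as.)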
