RAID 10 Verify Completion Time

Hi guys,

I have a 24x4TB machine with a RAID 10 setup.

I noticed that reads and writes on this machine are slow, so I had a look at the controller status and found the array is in a "verifying" state.

Right now it shows like this: http://prntscr.com/1zkfjj

I need to know why it is running and whether it will run often. How long does it take to complete?


Comments

  • AnthonySmith Member, Patron Provider

    How long is a piece of string?

    More info, please?

  • @AnthonySmith said:
    How long is a piece of string?

    More info, please?

    What info should I post? Can you tell me which commands will yield the stats you're asking for?

  • AnthonySmith Member, Patron Provider

    RAID card type, cache size; is it mbraid/swraid/dmraid/mdraid? Also, you have the stripe size at 256K, which is probably less than ideal for an array that size; best guess, 1024K would be better.

  • mikeg Member
    edited October 2013

    You can see the completion progress in the RCmpl column. Time how long it takes to move 1 percent and you can work out how long the whole thing will take. I'm guessing your RAID card is a 3ware?

    And I agree with AnthonySmith: a stripe size of 1MB would be better. Could you recreate the array?
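    That back-of-the-envelope estimate can be sketched in shell. The percentages below are made-up placeholders; substitute the real RCmpl values you read from `tw_cli /c0 show`:

```shell
# Extrapolate verify completion time from two RCmpl readings taken an
# hour apart. The percentages here are placeholder values - replace them
# with the numbers from `tw_cli /c0 show`.
pct_first=12        # RCmpl % at the first reading
pct_second=13       # RCmpl % one hour later
interval_min=60     # minutes between the two readings

remaining=$((100 - pct_second))
rate=$((pct_second - pct_first))             # percent gained per interval
eta_min=$((remaining * interval_min / rate))
echo "roughly ${eta_min} minutes (~$((eta_min / 60)) hours) to go"
```

    At 1% per hour on this array, that works out to a few days, which matches what big verifies on large SATA arrays tend to look like.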

  • @AnthonySmith said:
    RAID card type, cache size; is it mbraid/swraid/dmraid/mdraid? Also, you have the stripe size at 256K, which is probably less than ideal for an array that size; best guess, 1024K would be better.

    I only know a few stats.

    It is H/W RAID and yes, it is 3ware. I don't mind changing that 256K to 1MB or more.

    @mikeg said:
    You can see the completion progress in the RCmpl column. Time how long it takes to move 1 percent and you can work out how long the whole thing will take. I'm guessing your RAID card is a 3ware?

    And I agree with AnthonySmith: a stripe size of 1MB would be better. Could you recreate the array?

    I can recreate the array with 1MB or more, but will that cause loss of data?

  • AnthonySmith Member, Patron Provider
    edited October 2013

    @filegrasper said:
    I can recreate the array with 1MB or more, but will that cause loss of data?

    Depends on the card; you need to read the manual :)

    Also, you did not confirm: does the card have a cache + BBU? That makes a HUGE difference in rebuild times, and on a RAID array that size I would even say the difference between 6 hours and 6 days.

    Curious, if it is for storage, why you did not go for 22x4TB RAID 6 along with 2x128GB SSD in RAID 0 with flashcache, as that would outperform your RAID 10 and give a MASSIVE amount of storage in comparison.
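    The cache/BBU question can be answered from the 3ware CLI. A quick check, assuming controller 0 and unit 0 (list yours with `tw_cli show` first):

```shell
# Check whether the controller has a BBU and whether the unit's
# write cache is enabled. /c0 and /u0 are assumptions - adjust to
# whatever `tw_cli show` reports for your system.
tw_cli /c0/bbu show status     # BBU presence and health
tw_cli /c0/u0 show             # unit details, including cache setting
tw_cli /c0 show unitstatus     # all units at a glance
```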

  • @filegrasper said:
    I can recreate the array with 1MB or more, but will that cause loss of data?

    It will cause all data to be lost, so don't do it if there is already data on it.

  • @AnthonySmith said:
    Curious, if it is for storage, why you did not go for 22x4TB RAID 6 along with 2x128GB SSD in RAID 0 with flashcache, as that would outperform your RAID 10 and give a MASSIVE amount of storage in comparison.

    I previously tried 12x4TB RAID 6 and the performance was very low.

    Now I have moved to 24x4TB RAID 10, with 2x120GB SSD in RAID 1 for the OS. I think this will get some decent speed.

    And yes, I think it has a BBU + cache.

    BTW, will going from 256KB to 1MB or 2MB improve performance much? If so, only under certain conditions, only on rebuild, or in every operation?

  • @mikeg said:
    It will cause all data to be lost, so don't do it if there is already data on it.

    I have two machines with the same setup and data is being moved onto them, so I can reformat them if that stripe size really improves performance.

    Please let me know which operations it will improve.

  • What is the array going to be used for?

  • @mikeg said:
    What is the array going to be used for?

    The array is going to be used for storing a lot of data, where uploads and downloads are always at 5Gbps+ speeds.

  • So it will be storing large files? When storing large files, it is best to use a larger stripe size to increase performance.

  • @mikeg said:
    So it will be storing large files? When storing large files, it is best to use a larger stripe size to increase performance.

    The average file size varies from 500MB to 10GB per file, so what's the recommended stripe size?

  • Choose the largest stripe size your RAID card will allow, then. That will most likely be 1MB; you should get increased performance because all your files are larger than 1MB.
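    On 3ware cards the stripe is chosen when the unit is created, so this only applies to a fresh array. A hedged sketch with `tw_cli`; the port list and stripe value are examples, and the stripe sizes (in KB) your firmware actually accepts should be confirmed in the 9650SE CLI guide first:

```shell
# WARNING: this DESTROYS any data on the listed ports - fresh arrays only.
# disk= port range and stripe= value are illustrative; check which stripe
# sizes your card/firmware supports before running anything like this.
tw_cli /c0 add type=raid10 disk=0-23 stripe=256
tw_cli /c0 show unitstatus     # confirm the new unit and its settings
```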

  • filegrasper Member
    edited October 2013

    @mikeg said:
    Choose the largest stripe size your RAID card will allow, then. That will most likely be 1MB; you should get increased performance because all your files are larger than 1MB.

    Won't my card allow changing the stripe size on the fly, without reformatting? As I have a 3ware, what's the maximum size it allows?

    This page says it is possible without reformatting:

    https://www.3ware.com/3warekb/attachments/Pages from 9650SE UsrGuide on RLM-GUID7b973d7b88a14c7e912409316fdbc41c.pdf

  • What is your exact card model? Run this:

    tw_cli /c0 show model

    You can't change the stripe size of an already-created array. If recreating the array is a big hassle, then don't bother; a 256K stripe size is OK, and increasing to 1MB won't give a "huge" performance increase.

  • @mikeg said:
    What is your exact card model? Run this:

    tw_cli /c0 show model

    You can't change the stripe size of an already-created array. If recreating the array is a big hassle, then don't bother; a 256K stripe size is OK, and increasing to 1MB won't give a "huge" performance increase.

    http://prntscr.com/1zkpei

    Yes, I believe recreating the RAID will be a huge pain, as 44TB of data needs to be moved to another machine and then moved back again.

    Also, if it really boosts performance, then I have to do the same for 6 machines with the same setup.

    And I have ordered 3 more machines with the same setup, so should I set the stripe size to the maximum on the new machines for my needs?

  • AnthonySmith Member, Patron Provider

    Yes, RAID 6 would be slow compared to RAID 10, but SSD-cached RAID 6 will give better performance and more storage than RAID 10.

    It seems you have spent quite a lot of money without really knowing much about your setup?

  • AnthonySmith said: It seems you have spent quite a lot of money without really knowing much about your setup?

    Well, yes, moving from RAID 6 setups to RAID 10 setups brought a huge increase in expense.

    I am paying more than 10k euros for the machines I have. Are you sure RAID 6 with SSD caching will improve performance? I read up for a month and decided to move to RAID 10 for the best performance. Also, most people said SSD caching won't work in my case, as there are many large hot files on my system.

  • I would then try the bigger stripe size on one of your newer machines to see if it gives you much of an increase in performance. I can't find any info on the max stripe size; all it says on the LSI site is "Variable stripe size for performance tuning by application". LSI have updated their website yet again and it only lists the current controllers they offer... You might be able to find it here:

    http://www.lsi.com/support/Pages/download-results.aspx?productcode=P00033&assettype=0&component=Storage Component&productfamily=Legacy RAID Controllers&productname=3ware 9650SE-24M8

    But finding any specific info on their site can be a bit of a nightmare.

  • AnthonySmith Member, Patron Provider

    That's fine; if you have done your research then you obviously know what suits you best.

    It will greatly improve write performance, yes, but it really depends on what you need.

    SSD-cached RAID 60 would again be much faster than native SATA RAID 10 in every aspect and would probably give you roughly the same amount of storage.

  • @AnthonySmith said:
    That's fine; if you have done your research then you obviously know what suits you best.

    It will greatly improve write performance, yes, but it really depends on what you need.

    SSD-cached RAID 60 would again be much faster than native SATA RAID 10 in every aspect and would probably give you roughly the same amount of storage.

    As I need lots of write speed as well as read speed, I moved from 12x4TB RAID 6 to 24x4TB RAID 10, to compensate on space too.

    And thanks for all your help expanding my brain with info (@mikeg and @AnthonySmith).

    I have a few more questions. Will this verification process that is running now happen all the time?

    And why is it happening? I also see it is taking a long time to move 1%; say, 1+ hour per percent, even more than that.

    @mikeg said:
    I would then try the bigger stripe size on one of your newer machines to see if it gives you much of an increase in performance. I can't find any info on the max stripe size; all it says on the LSI site is "Variable stripe size for performance tuning by application". LSI have updated their website yet again and it only lists the current controllers they offer... You might be able to find it here:

    http://www.lsi.com/support/Pages/download-results.aspx?productcode=P00033&assettype=0&component=Storage Component&productfamily=Legacy RAID Controllers&productname=3ware 9650SE-24M8

    But finding any specific info on their site can be a bit of a nightmare.

    Well, yes, I will give it a try on the new machines I am going to receive. Also, are you sure it won't degrade the performance compared to what I have?

    What performance will it increase?

  • SSD Caching would be good if there were a lot of random reads/writes, but I'm not sure it would offer much of an improvement for large sequential reads/writes like in this array.

  • mikeg Member
    edited October 2013

    @filegrasper said:

    Check out this summary on RAID consistency checks; it should answer all your questions:

    http://www.thomas-krenn.com/en/wiki/RAID_Consistency_Check

    Also, increasing the stripe size for your workload will not degrade performance; it should increase it.

  • AnthonySmith Member, Patron Provider

    Well, if you put flashcache on in (I forget the specific name of the option) what we'll call write-ahead mode, i.e. all writes are done to the SSD first and then sent to the storage array, and you had 2 SSD drives in RAID 0 running as the front-end cache, sequential performance would be a massive improvement over standard SATA RAID 10.

    So let's say you have 24 drive bays:

    22x4TB in RAID 6 would give you between 65 and 70TB of usable storage. Then create a RAID 0 array with 2x256GB SSDs, install flashcache in write-ahead mode, and you will get initial sequential write speeds of between 600 and 800 MB/s, maybe more.

    That is probably one of the options I would have experimented with, as it also gives you the double parity and increased speed of the SATA array with the +0 on the 6.
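    The layout described above would look roughly like this with the flashcache tools. The device names are hypothetical, and the mode being described corresponds to flashcache's write-back (`-p back`) mode:

```shell
# Hypothetical devices: /dev/md0 = 2x SSD in RAID 0 (software RAID),
# /dev/sda = the 22x4TB RAID 6 unit exported by the 3ware card.
# Write-back mode lands writes on the SSD first, then destages them
# to the backing array.
flashcache_create -p back storage_cache /dev/md0 /dev/sda

# The cached volume then appears under /dev/mapper and is used like
# any block device.
mkfs.ext4 /dev/mapper/storage_cache
mount /dev/mapper/storage_cache /storage
```

    Note that with write-back on a RAID 0 SSD pair, losing either SSD loses dirty (not yet destaged) writes, which is part of the trade-off being discussed.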

  • AnthonySmith Member, Patron Provider

    Also, if you can afford to reinstall, then the benefit of using the max stripe size on that array will be significant 6 months down the line, when the array fills up and is under more load.

  • My system comes with 24 bays + space for 2 additional HDDs.

    My setup is 24x4TB RAID 10 with 2x120GB RAID 1 for the OS. Well, I could even try your suggestion on one of my new machines.

    I think that setup is much more difficult, isn't it? Won't it cause problems in the future?

  • AnthonySmith Member, Patron Provider

    You have so many options it is insane :)

    Just stick with what you are happy to manage; if it is getting too complicated, do not introduce a flash cache.

    If you want help and are happy to reinstall, I am happy to help you out, but obviously I would need access to the server.

    Ant.

  • @AnthonySmith said:
    Well, if you put flashcache on in (I forget the specific name of the option) what we'll call write-ahead mode, i.e. all writes are done to the SSD first and then sent to the storage array, and you had 2 SSD drives in RAID 0 running as the front-end cache, sequential performance would be a massive improvement over standard SATA RAID 10.

    So let's say you have 24 drive bays:

    22x4TB in RAID 6 would give you between 65 and 70TB of usable storage. Then create a RAID 0 array with 2x256GB SSDs, install flashcache in write-ahead mode, and you will get initial sequential write speeds of between 600 and 800 MB/s, maybe more.

    That is probably one of the options I would have experimented with, as it also gives you the double parity and increased speed of the SATA array with the +0 on the 6.

    We actually use this exact method (flashcache) on our storage arrays, with 2x SSD drives in RAID 0 as a cache for a large RAID 10 array. I chose this because of the random read/write performance increase, as we host VPSes and they don't tend to be used for large sequential access. I would be interested to see what kind of throughput a 22x4TB RAID 6 would produce with a BBU and cache and write-back enabled; I would say around 800MB/s by itself. I'll be configuring our new storage servers soon with about this many 2TB drives, so I might give it a test. We will also be using a PCIe SSD card for a cache, so I'm hoping for 1GB/s and 70,000+ IOPS :)

  • @filegrasper said:
    Does this verification process which running now will happen all time?

    You might have a weekly/monthly/whatever scheduled verify task - check with:

    tw_cli /c0 show verify
    tw_cli /c0 show selftest
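    If a schedule slot isn't the cause, a verify can also be started or stopped manually per unit (u0 is an assumption here; list your units with `tw_cli /c0 show`):

```shell
# Manually control a verify on unit 0 of controller 0.
tw_cli /c0/u0 start verify
tw_cli /c0/u0 stop verify
```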
