LSI MegaRAID sucks?!

letbox Member, Patron Provider
edited January 2013 in General

Hey everyone!

I ran some tests on an LSI MegaRAID 9271-8iCC RAID card with CacheCade Pro 2.0, using a Samsung 256GB 840 Pro as the SSD cache in front of a RAID 10 of 4x Seagate Constellation ES 2TB drives. Here's what I got:

[root@xxxx]# ./ioping . -c 10

4096 bytes from . (ext4 /dev/sda2): request=1 time=0.1 ms
4096 bytes from . (ext4 /dev/sda2): request=2 time=0.1 ms
4096 bytes from . (ext4 /dev/sda2): request=3 time=0.7 ms
4096 bytes from . (ext4 /dev/sda2): request=4 time=0.8 ms
4096 bytes from . (ext4 /dev/sda2): request=5 time=0.9 ms
4096 bytes from . (ext4 /dev/sda2): request=6 time=0.7 ms
4096 bytes from . (ext4 /dev/sda2): request=7 time=0.9 ms
4096 bytes from . (ext4 /dev/sda2): request=8 time=0.8 ms
4096 bytes from . (ext4 /dev/sda2): request=9 time=0.9 ms
4096 bytes from . (ext4 /dev/sda2): request=10 time=0.7 ms

--- . (ext4 /dev/sda2) ioping statistics ---
10 requests completed in 9008.5 ms, 1537 iops, 6.0 mb/s
min/avg/max/mdev = 0.1/0.7/0.9/0.3 ms

[root@xxxx]# ./ioping . -R

--- . (ext4 /dev/sda2) ioping statistics ---
15164 requests completed in 3000.1 ms, 7108 iops, 27.8 mb/s
min/avg/max/mdev = 0.0/0.1/3.4/0.1 ms

How can it be this slow when I'm using an SSD cache? I get better results from a busy server with a plain SAS RAID 10. Maybe I missed something during setup?
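
For a sanity check on the numbers above, a 4K random-read run with fio against the same filesystem is one way to cross-check ioping; the file name, size and runtime here are placeholders for illustration, not settings from this box:

# 4K random reads with direct I/O, so the page cache is bypassed and only
# the array / CacheCade layer answers
fio --name=randread-check --filename=./fio.test --size=1G \
    --rw=randread --bs=4k --direct=1 --ioengine=libaio \
    --iodepth=32 --runtime=30 --time_based --group_reporting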


Comments

  • concerto49 Member
    edited January 2013

    Nick_A (RamNode) had issues with Samsung 840 Pro and LSI 9271 too. Looks like firmware / incompatibility issues. He changed back to Samsung 830 and it was fixed.
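
    If it really is a firmware issue, the versions on both sides are worth checking; the device names and the megaraid disk index below are assumptions, not taken from anyone's setup here:

    # RAID controller firmware package version
    MegaCli64 -AdpAllInfo -a0 | grep -i 'FW Package'
    # Firmware of an SSD sitting behind the controller (index 0 is an example)
    smartctl -d megaraid,0 -i /dev/sda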

  • @concerto49 said: Nick_A (RamNode) had issues with Samsung 840 Pro and LSI 9271 too. Looks like firmware / incompatibility issues. He changed back to Samsung 830 and it was fixed.

    Nice! This is bad news for my current build!

  • Without sounding patronising, everything configured correctly?

  • Forgive me if I didn't read, but do you have a BBU & Write cache enabled?
    What about stripe size?
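
    For reference, these can usually be read back from a MegaRAID card with MegaCli (storcli has equivalents); the binary name and adapter number below are assumptions:

    # Battery / CacheVault status on adapter 0
    MegaCli64 -AdpBbuCmd -GetBbuStatus -a0
    # Cache policy (WriteBack vs WriteThrough, Direct vs Cached) per logical drive
    MegaCli64 -LDGetProp -Cache -LAll -a0
    # Logical drive details, including stripe size
    MegaCli64 -LDInfo -LAll -a0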

  • letbox Member, Patron Provider

    @ShardHost said: Without sounding patronising, everything configured correctly?

    I've reconfigured it 10 times, trying every possible way to get better results.

  • letbox Member, Patron Provider

    @BradND said: Forgive me if I didn't read, but do you have a BBU & Write cache enabled?

    What about stripe size?

    Write cache is enabled. I tried both 1MB and 256K stripe sizes.

  • concerto49 Member
    edited January 2013

    @ShardHost said: Without sounding patronising, everything configured correctly?

    @BradND said: Forgive me if I didn't read, but do you have a BBU & Write cache enabled?

    What about stripe size?

    It seems to be this particular card and the Samsung 840 Pro in particular. No need to try playing with configurations. It's all been done. Nick even tried different drives and eventually gave up.

    http://www.webhostingtalk.com/showthread.php?t=1226776 - here's the thread discussing it just to say that I didn't make it up.

  • Unrelated to your problem (but just for interest) did you go for BBU or cachevault?

  • @concerto49 said: It seems to be this particular card and the Samsung 840 Pro in particular. No need to try playing with configurations. It's all been done. Nick even tried different drives and eventually gave up.

    I certainly dodged a bullet there then. Just had all the quotes in for this node and was planning to purchase Monday. Not one system builder mentioned it.

  • letbox Member, Patron Provider

    @ShardHost said: Unrelated to your problem (but just for interest) did you go for BBU or cachevault?

    cachevault

  • miTgiB Member
    edited January 2013

    @concerto49 said: It seems to be this particular card and the Samsung 840 Pro in particular. No need to try playing with configurations. It's all been done. Nick even tried different drives and eventually gave up.

    I get similar results with a Samsung 840 (not Pro) and 12x Toshiba 2TB SAS2 drives on a 9271-4i with CacheCade and CacheVault, RAID 10 array, 64K stripe.

    Granted, there are a few active VPS on the node.

    [root@e5clt20 ~]# ioping . -c 10
    4096 bytes from . (ext4 /dev/sda3): request=1 time=0.1 ms
    4096 bytes from . (ext4 /dev/sda3): request=2 time=0.2 ms
    4096 bytes from . (ext4 /dev/sda3): request=3 time=0.2 ms
    4096 bytes from . (ext4 /dev/sda3): request=4 time=0.7 ms
    4096 bytes from . (ext4 /dev/sda3): request=5 time=0.8 ms
    4096 bytes from . (ext4 /dev/sda3): request=6 time=0.9 ms
    4096 bytes from . (ext4 /dev/sda3): request=7 time=0.8 ms
    4096 bytes from . (ext4 /dev/sda3): request=8 time=0.9 ms
    4096 bytes from . (ext4 /dev/sda3): request=9 time=0.9 ms
    4096 bytes from . (ext4 /dev/sda3): request=10 time=0.9 ms
    
    --- . (ext4 /dev/sda3) ioping statistics ---
    10 requests completed in 9007.5 ms, 1594 iops, 6.2 mb/s
    min/avg/max/mdev = 0.1/0.6/0.9/0.3 ms
    [root@e5clt20 ~]# ioping . -R
    
    --- . (ext4 /dev/sda3) ioping statistics ---
    19873 requests completed in 3000.0 ms, 11817 iops, 46.2 mb/s
    min/avg/max/mdev = 0.0/0.1/1.5/0.0 ms
    
  • concerto49 Member
    edited January 2013

    @miTgiB said: I get similar results with a Samsung 840 (not Pro) and 12x Toshiba 2TB SAS2 drives on a 9271-4i with CacheCade and CacheVault, RAID 10 array, 64K stripe.

    The IO isn't particularly high, but I don't get these terrible results with the Samsung 840 Pro and LSI 9266, so the problem seems to be tied to the LSI 9271.

  • @concerto49 said: The IO isn't particularly high, but I don't get these terrible results with the Samsung 840 Pro and LSI 9266, so the problem seems to be tied to the LSI 9271.

    Using CC on it?

  • concerto49 Member
    edited January 2013

    @ShardHost said: Using CC on it?

    Yes, and there are a few active VPS on it too, but I'm sure there are differences in setup, so I can't say that just because my 9266 works, yours will.

  • @concerto49 said: Yes, and there are a few active VPS on it too.

    Back to the drawing board for SSD choices

  • Was about to buy the exact same SSD and LSI card for my next build.

    /subscribe

  • Nick_A Member, Top Host, Host Rep

    All I can say is you're welcome. I'm still hoping it was maybe the individual card or maybe I got 8 bad 840 Pros in a row, but the problem was completely erased with 830s. I'm about to replace another node with 830s in a few minutes. I think it will be conclusive if this also fixes the problem.

  • @Nick_A said: All I can say is you're welcome. I'm still hoping it was maybe the individual card or maybe I got 8 bad 840 Pros in a row, but the problem was completely erased with 830s. I'm about to replace another node with 830s in a few minutes. I think it will be conclusive if this also fixes the problem.

    That really sucks. Have you contacted either vendor about the issue?

  • letbox Member, Patron Provider

    @Nick_A said: All I can say is you're welcome. I'm still hoping it was maybe the individual card or maybe I got 8 bad 840 Pros in a row, but the problem was completely erased with 830s. I'm about to replace another node with 830s in a few minutes. I think it will be conclusive if this also fixes the problem.

    Please keep us updated.

  • Nick_A Member, Top Host, Host Rep

    LSI says they should be compatible. LSI has also taken back support statements made to me before, so who knows.

  • Nick_A Member, Top Host, Host Rep

    Coincidentally, if anyone wants to buy about 8 128GB 840 Pros, I know someone with extras...

  • If only they were bigger. I would have had another use for them! ;)

  • letbox Member, Patron Provider

    @ShardHost said: If only they were bigger. I would have had another use for them! ;)

    256GB 840 pro is here :D

  • @key12 said: 256GB 840 pro is here :D

    All about price :)

  • letbox Member, Patron Provider
    edited January 2013

    @ShardHost said: All about price :)

    I got a big _!-, I paid about $350 for it :(

  • Nick_A Member, Top Host, Host Rep

    @concerto49 said: Yes, and there are a few active VPS on it too, but I'm sure there are differences in setup, so I can't say that just because my 9266 works, yours will.

    Does a dd test create a huge CPU load and basically lock the node up?

  • @Nick_A said: Does a dd test create a huge CPU load and basically lock the node up?

    Not at the moment, but it is very interesting in that a dd test on / gets almost double the IO compared to a dd test on /vz. That's the only "issue" I've found so far on it, which is strange.
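
    For reference, the dd test being discussed usually looks something like the lines below (the output paths are placeholders); running it against both / and /vz is how the comparison above would be reproduced:

    # ~1GB sequential write, flushed to disk before dd reports a rate
    dd if=/dev/zero of=/ddtest.tmp bs=64k count=16k conv=fdatasync; rm -f /ddtest.tmp
    dd if=/dev/zero of=/vz/ddtest.tmp bs=64k count=16k conv=fdatasync; rm -f /vz/ddtest.tmp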

  • LSI does suck. We had their engineering team engaged over poor performance with 8x SSDs on a 9285-8e, and it couldn't do over 500MB/sec after two weeks of tuning.

    Switched to the older 9750-8 (3ware gen) and performance is much better.

    Some basic tips: you definitely want a newer kernel so you can distribute interrupts, use a modern filesystem, etc.
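
    On the interrupts point, one quick check is whether all of the controller's interrupts are landing on a single core; the IRQ number and CPU mask below are purely illustrative:

    # See which CPUs service the megaraid_sas interrupts
    grep -i megasas /proc/interrupts
    # Spread a given IRQ (say 42) across CPUs 0-3 (example mask only)
    echo f > /proc/irq/42/smp_affinity
    # Or let irqbalance distribute them automatically
    service irqbalance start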

  • To the best of my knowledge, hardware RAID does not support TRIM yet, so SSDs that do a lot of writes & deletes should degrade in performance fairly quickly. Not sure if that's what's happening here.

  • @jon617 said: hardware RAID does not support TRIM yet, so SSDs that do a lot of writes & deletes should degrade in performance fairly quickly. Not sure if that's what's happening here.

    That's not what's happening here; that would have been an issue years ago. SSDs have long had measures to cope even without TRIM, e.g. garbage collection. Sure, performance isn't as good, but we're talking about IO dropping to a few MB/s here as opposed to hundreds of MB/s. It doesn't get that bad.
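
    For anyone who wants to check what their own setup actually exposes, something like this would show it (device names are placeholders); behind a hardware RAID controller the virtual drive normally won't advertise discard support at all:

    # Does the block device advertise discard/TRIM? All-zero values mean no.
    lsblk --discard /dev/sda
    # For a directly attached SSD, hdparm shows whether the drive itself supports TRIM
    hdparm -I /dev/sdb | grep -i trim
    # On filesystems where discard does work, a manual trim would be:
    fstrim -v /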
