RAID Card and ESXi

I have a Supermicro server that was originally built to be colocated in a datacenter. It ended up being shipped back to me, and I'm now using it as a backup server running ESXi 6.

Per the original server specs I was given, the RAID card in there is an LSI 9220-8i, but the array seems very, very slow with that card, and I've seen other reports of slow arrays with this card (or similar ones) under ESXi.

I have two other servers with Adaptec 6405E cards, and those seem to work nicely.

Does anyone know if this is a known issue with those LSI cards, and if it's worth swapping out the card for a 6405E (or similar), assuming that it shouldn't be too complicated to do?

TIA!

Comments

  • IIRC, the 9220-8i has no on-board cache. That's a big problem because, unlike most other OSes, ESXi does not do host-side disk caching...
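
    One quick way to confirm that on the host (rough sketch, assuming the VMware build of storcli is installed under /opt/lsi and the card shows up as controller 0):

        # controller summary - "On Board Memory Size" of 0 MB means no cache
        /opt/lsi/storcli/storcli /c0 show all | grep -i "memory size"
        # current cache policy of the first virtual drive (WT = write-through)
        /opt/lsi/storcli/storcli /c0/v0 show all | grep -i cache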

    Thanked by 1isaacl
  • @Jarry said:
    IIRC, the 9220-8i has no on-board cache. That's a big problem because, unlike most other OSes, ESXi does not do host-side disk caching...

    Good point...

    I wonder if that's what's causing the issue.

    It seems like the 6405E has onboard cache, which might explain why it's been working so much better for me.

    The 9220-8i still seems to cost more for some reason, but I may as well just order a 6405E, unless there's a newer card that's more worthwhile for the price...

  • mik997 Member
    edited October 2016

    I had a similar issue with ESXi 6 and a B120i onboard controller in an HP server - terrible I/O performance

    turned out to be the hpvsa driver - downgrading to the ESXi 5.5 vib resolved the issue

    might be something to look at ..
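
    from memory, the downgrade itself was just swapping the vib (names/paths below are examples - check yours with the list command first):

        # see which hpvsa driver version is currently installed
        esxcli software vib list | grep -i hpvsa
        # remove the ESXi 6 driver, install the older 5.5 vib, then reboot
        esxcli software vib remove -n scsi-hpvsa
        esxcli software vib install -v /tmp/scsi-hpvsa-5.5.0.vib --no-sig-check
        reboot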

    Thanked by 1isaacl
  • isaacl Member
    edited October 2016

    @mik997 said:
    I had a similar issue with ESXi 6 and a B120i onboard controller in an HP server - terrible I/O performance

    turned out to be the hpvsa driver - downgrading to the ESXi 5.5 vib resolved the issue

    might be something to look at ..

    Will check that, thanks!

    Though that seems to be an HP-specific driver...

    And I think it wasn't great before I upgraded to ESXi 6 either, though I don't remember for sure...

  • How slow are you talking about?

    I have suspicions about my RAID setup as well. On my RAID 10 setup, in my ESXi 6 VM I get around 80-120 MB/s.

    Thanked by 1isaacl
  • isaacl Member
    edited October 2016

    Unsure, but I'm running 4 x 4TB drives in a RAID 10 array, and I was trying to deploy a local VDP instance for replication. It's impossible to do unless I initially deploy thin disks (which I later found they recommend anyway), and then when I try to inflate each VMDK (I think each one is around 1 TB), it takes a few days and pretty much locks the server up (through the Client) for anything else, or at least everything takes a really long time to load.

    Have to test my speed later, but it's bad.
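
    Will probably just do a quick-and-dirty dd run inside a Linux VM on that datastore to get a number to compare (rough sketch, nothing scientific):

        # sequential write, bypassing the guest page cache
        dd if=/dev/zero of=/root/ddtest bs=1M count=4096 oflag=direct
        # sequential read of the same file
        dd if=/root/ddtest of=/dev/null bs=1M iflag=direct
        rm /root/ddtest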

  • Wow
    ...

    No, I have 4x3 TB drives on a Hetzner server with an LSI RAID card (I think). It's not that bad, never even close...

    (I thought they recommended thick disks for performance...)

    Thanked by 1isaacl
  • Saw a recommendation somewhere (not finding the source now) to initially deploy VDP using thin disks and inflate the VMDKs later, so the initial deployment finishes faster. I also had issues with the deployment not completing when I used thick disks, though that might also have had something to do with the internet connection for the backup server.

    My main servers, currently with Incero, are using the Adaptec 6405E cards, and they've been working very nicely, so I'm hoping it's just the card...

  • Just ordered a 6405e from eBay, hoping I can just swap it out, though I'll obviously have to rebuild the array...

  • Just to Necro my own topic -

    I got the 6405E, swapped out the card (so I now have an LSI 9220-8i I have to do something with), and initialized the RAID 10 array with the Clear option.

    Did the same process of deploying a VDP instance and then adding 4TB of storage using thin disks (I was going to try deploying directly with thick disks, but figured this would be faster and not need the vCenter server after the initial deployment, plus I could compare the inflate times).

    Just tried inflating a ~1 TB VMDK using the vSphere client (didn't see the option in the VMware Host Client); it says there's a bit less than 2 hours left, the numbers keep going down, and browsing the datastore still works while that's running.
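
    For anyone else looking: the same inflate can apparently also be done from the ESXi shell with vmkfstools while the VM is powered off (the path below is just an example):

        # convert a thin-provisioned VMDK to thick in place
        vmkfstools --inflatedisk /vmfs/volumes/datastore1/vdp/vdp_1.vmdk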

    So it seems like the switch was a success - not sure why the old card/array performed so badly.

    Thanks everyone (especially @Nomad) for your help and input!

    Thanked by 1Nomad
  • Glad to hear that you sorted it out. If I was of any help, that's nice to hear too. (:

    Weird though, I thought the 9220-8i was the better card here.

  • Dunno, the Adaptec cards have previously worked well for me with ESXi. It might be an OS compatibility thing, but as long as this works, it doesn't matter that it's not the "better" card...

  • And in case anyone is wondering, I think the info I was going on was based on this topic I found a while back: https://communities.vmware.com/message/2012333

    As @Jarry mentioned, the issue seems to be with the 9220-8i not having any onboard cache, as opposed to the Adaptec card, which has onboard cache.

    I need another few cards now; I'm guessing I should find something with onboard cache, just not sure what's best...

  • Clouvider Member, Patron Provider

    The 9271-4i, or the -8i if you need 8 drives, works well.

    Add CacheVault to it if you want safer write-back operation.
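
    For reference, checking or flipping the write cache policy on these is a one-liner with storcli (controller/VD numbers below are just placeholders):

        # show the current cache settings of virtual drive 0
        storcli /c0/v0 show all | grep -i cache
        # switch to write-back - only safe with CacheVault/BBU fitted
        storcli /c0/v0 set wrcache=wb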

  • @Clouvider said:
    The 9271-4i, or the -8i if you need 8 drives, works well.

    Add CacheVault to it if you want safer write-back operation.

    I'm using the IBM M5016 (which is actually a re-branded LSI 9271/9266). I flash it with the original LSI BIOS and use the LSI tools.

    I got a few of them quite cheap on eBay some time ago, and I can confirm they work very well with ESXi. BTW, the M5016 comes with the SuperCap by default (for the LSI 9271 you have to order it separately)...
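
    In case it helps anyone, the flash itself is basically a single MegaCli call (the firmware filename below is illustrative - use the package matching your card, and obviously at your own risk):

        # flash the first controller with the LSI firmware image
        MegaCli -AdpFwFlash -f mr2208fw.rom -a0
        # verify firmware/BIOS versions afterwards
        MegaCli -AdpAllInfo -a0 | grep -iE "fw package|bios"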

  • Thanks all, going to see what I can find.

  • I can get an Adaptec 5405 for much less; the question is whether I'd gain much with the LSI cards...

    Thanks.

  • Looking now at the IBM M5015, which is the rebranded 9260-8i. The question is whether it would work better with ESXi than the 6405Es; I'm also seeing that they run hot, just not sure if that's a reason not to try one...

  • And the cheap 5405 that I got doesn't want to play nice with ESXi 6.x (6.5 won't even get past loading the storage stack on the installer, and seems to be crashing with 6.0 as well).

    Guess it's time to try LSI/IBM.

  • zrunner Member
    edited November 2016

    @isaacl said:
    And the cheap 5405 that I got doesn't want to play nice with ESXi 6.x (6.5 won't even get past loading the storage stack on the installer, and seems to be crashing with 6.0 as well).

    Are you by any chance running the latest Adaptec drivers?


    I did an image build like a month ago (last patchset), and at the same time I took the opportunity to update the image/ISO/offline bundle with all the updated drivers.

    Three of my servers kept doing PSODs during/after the update. Two of them are located locally, so I went there with a screen and keyboard hooked up. To somewhat shorten the story: after reading the release notes for the Adaptec drivers, I found they stated something along the lines of cards below the 6000 series no longer being supported. I re-did my image with drivers one or two versions back and all was good again.

    My Adaptec cards are the 2405, 5805 and 5805Z, all on 6.0 U2.

  • @zrunner - thanks a lot!

    I bundled this driver with an ISO of the latest 6.0 build:

    https://my.vmware.com/web/vmware/details?downloadGroup=DT-ESXI60-PMC-AACRAID-62141024&productId=491

    Any idea which driver/version you used?

    If you got it working with the 5805, that driver should work with the 5405 as well (unless there's an issue with the card)...

    Thanks a lot!

  • zrunner Member
    edited November 2016

    @isaacl said:

    >

    Yeah, just logged into one of the boxes to check the VIB, and that's the one I ended up running, so the newer drivers removed support for the old cards.

    Edit: just checked my download history; I actually went back quite a few driver versions (4 of them). But yeah, that's the one with the old cards still supported; it seems that when they hit the 6.2.1.5 releases, the old cards got removed.

    6.2.1.50629 = first .5 release, old cards removed.
    6.2.1.41024 = last .4 release, old cards still working.
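
    If you want to double-check what a host actually ended up with (vib names from memory, adjust as needed):

        # installed Adaptec driver vib and its version
        esxcli software vib list | grep -i aacraid
        # which driver each storage adapter is actually bound to
        esxcli storage core adapter list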

  • Still having issues with that driver on my end, unless it's because I updated the firmware on the card as well?

    And any clue where you got the driver from?

    Thanks.

  • zrunner Member
    edited November 2016

    I'm running the latest BIOS on all my cards; they actually all use the same BIOS:

    Ver. 5.2.0 Build 18948

    I got my Adaptec driver from VMware; I'm using the one from your URL.

    Edit:
    What bugs me is that both my 5805(Z) cards sometimes have trouble initializing.

    So on a reboot it sometimes fails with some bogus error like "couldn't load BIOS" and needs another reboot. I hate it when an ESXi update forces me to reboot, as the card might fail to come up and need 2-4 reboots, which prolongs the boot process and means logging in with iLO/iDRAC/KVM etc. just to do Ctrl+Alt+Del.

    And they run hot as hell; there should be active cooling on those cards.

  • Guess I'll try again, thanks a lot!

    And you don't happen to still have the image you used, right?

  • zrunner Member
    edited November 2016

    I do. I use ESXi Customizer to build my images; I then 'build' one ISO (.iso) and one offline bundle (.zip) each for HP systems, Dell systems, and one 'standard' for the rest.

  • Did the same, just wondering if I messed something up with the ISO I made/used.

    If you only added the Adaptec driver, any chance you can upload your standard files, so I can try them with my server?

    Thanks a lot!

  • zrunner Member
    edited November 2016

    Sure, got any place I could/should upload it to?

    I do have a lot of other "crap" added to my ISOs.
    I can list what I remember and know I added by reading the VIBs in the ISO :)

    Shouldn't be an issue, as most of them are just updated drivers; some are additions:

    a lot of default drivers updated to the latest available
    Remote Console 9.0 for Windows
    latest VMware Tools
    OCZ drivers
    additional AHCI controller support (xahci)
    QNAP NFS/iSCSI drivers
    skge network drivers
    various Realtek network drivers
    nForce network drivers
    Atheros network drivers
    sky2 network drivers
    latest Host UI
    Synology NFS/iSCSI drivers
    latest AMD/Intel CPU microcodes

  • Nice, thanks!

    I assume any extra drivers shouldn't get in the way.

    WeTransfer.com should work to upload the file(s).

    Also opened a ticket with Adaptec to see what they say. They mentioned that I can try the Series 6 driver (http://storage.microsemi.com/en-us/speed/raid/aac/linux/aacraid_vmware_drivers_1_2_1-52011_cert_tgz.php), but I'm pretty sure that won't work. I replied and will see what they say back...

    Thanks a lot!
