How do you test SSD caching?

What are the various ways in which you can test the performance of an SSD cache? Hardware specs are as follows:

CPU Xeon E5-2620 (6 Cores, 12 Threads, 2.5GHz Turbo)
2 x 240 GB SSD
4 x 4 TB SATA III
128 GB DDR3 RAM ECC
LSI Hardware RAID Controller

Also, what do you guys think of this spec for a host node?


Comments

  • dd and ioping tests, obviously. The results should be higher than a typical RAID 10 setup.

  • @budi1413 said:
    dd and ioping tests, obviously. The results should be higher than a typical RAID 10 setup.

    Are these any good?

    [root@server ~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 2.90545 s, 370 MB/s
    
    [root@server ~]# ioping -c 10 /dev/sda
    4.0 kb from /dev/sda (device 7451.0 Gb): request=1 time=321.8 ms
    4.0 kb from /dev/sda (device 7451.0 Gb): request=2 time=13.6 ms
    4.0 kb from /dev/sda (device 7451.0 Gb): request=3 time=9.9 ms
    4.0 kb from /dev/sda (device 7451.0 Gb): request=4 time=19.1 ms
    4.0 kb from /dev/sda (device 7451.0 Gb): request=5 time=13.1 ms
    4.0 kb from /dev/sda (device 7451.0 Gb): request=6 time=16.8 ms
    4.0 kb from /dev/sda (device 7451.0 Gb): request=7 time=15.1 ms
    4.0 kb from /dev/sda (device 7451.0 Gb): request=8 time=9.1 ms
    4.0 kb from /dev/sda (device 7451.0 Gb): request=9 time=20.8 ms
    4.0 kb from /dev/sda (device 7451.0 Gb): request=10 time=29.1 ms
    
    --- /dev/sda (device 7451.0 Gb) ioping statistics ---
    10 requests completed in 9.5 s, 21 iops, 85.4 kb/s
    min/avg/max/mdev = 9.1 ms / 46.8 ms / 321.8 ms / 91.8 ms
    
  • @ultimatehostings what kind of SSD caching technology are you using?

  • @marcm

    The server has a hardware RAID card with BBU and SSD caching ability. Not quite sure if this answers your question; the server was set up by the provider, and I only asked them to enable SSD caching with 2 x 240 GB SSDs.

  • @ultimatehostings - who is the provider / DC ?

  • @marcm said:
    @ultimatehostings - who is the provider / DC?

    Incero

  • @ultimatehostings ask Incero about CacheCade Pro 2.0

  • @marcm:

    I'm sure that it's an LSI MegaRAID card. Do you think the I/O should be better than what I posted?

  • @ultimatehostings no, it's right on; however, SSD caching is not enabled.

  • I see, I'll definitely contact them.

  • @ultimatehostings said:
    I see, I'll definitely contact them.

    You can check whether CacheCade is enabled using the MegaCli tools.
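
    A rough sketch of what that check might look like, assuming MegaCli64 lives in the usual /opt/MegaRAID/MegaCli path (adjust for your install); the dedicated CacheCade display option is an assumption and may differ between MegaCli versions:

    # Adapter summary; grep for cache-related feature lines (e.g. CacheCade, FastPath).
    /opt/MegaRAID/MegaCli/MegaCli64 -AdpAllInfo -aALL | grep -i cache

    # Per-logical-drive info; a CacheCade-backed LD should mention the SSD cache
    # in its cache policy / properties.
    /opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -aALL

    # Assumed CacheCade-specific view on newer MegaCli builds (option name may vary).
    /opt/MegaRAID/MegaCli/MegaCli64 -CfgCacheCadeDsply -aALL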

  • ultimatehostings said: Are these any good?

    That ioping result doesn't look good at all.

  • @budi1413 said:
    That ioping result doesn't look good at all.

    Thanks for the input, working on improving it.

  • budi1413 said: That ioping result doesn't look good at all.

    That's because SSD caching isn't enabled for the RAID 10 array.

  • Maybe the instructions I provided were incorrect.

  • @Jack - You're assuming he is using FlashCache.

  • SpeedyKVM Banned, Member

    @marcm said:
    ultimatehostings ask Incero about CacheCade Pro 2.0

    The card is an LSI 9271-8i with CacheCade :)

    @ultimatehostings reboot your machine, use the KVM, and browse the LSI BIOS to configure the SSD caching however you want. When an order has no specific details on how to set up caching, we usually set caching up in read-only mode, so I don't think you would see an ioping improvement, but you would see a regular file-serving improvement (I don't think it's going to cache a block that's been hit once!). Advanced users (who actually monitor the status/health of their SSDs via megacli) might want to enable read and write caching; see the sketch at the end of this comment. However, if you enable write caching without monitoring SSD health, be prepared for RAID failure when the SSD cache drives (presumably in RAID 1) die at nearly the exact same time :-). With monitoring enabled you can ask us to swap out the SSDs one at a time before they fail. We've had a couple of customers using 120 GB SSDs (bad idea) for caching in R/W mode who allowed them both to fail at the same time. The older LSI BIOS at the time didn't handle the event well.

    Feel free to open a ticket also, and someone can provide you with courtesy assistance on your unmanaged machine during regular hours to help you set up megacli, change your caching modes, etc.
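
    A rough illustration of the SSD health monitoring mentioned above, assuming the usual MegaCli64 install path and smartctl's megaraid passthrough; the device ID is a placeholder you would read from the -PDList output:

    # List physical drives behind the controller; watch Media Error Count,
    # Predictive Failure Count and Firmware state for each slot.
    /opt/MegaRAID/MegaCli/MegaCli64 -PDList -aALL | egrep -i 'slot|media error|predictive|firmware state'

    # Query an individual drive's SMART data through the LSI controller;
    # "4" is a placeholder device ID taken from the -PDList output.
    smartctl -a -d megaraid,4 /dev/sda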

  • SpeedyKVM Banned, Member
    edited November 2013

    P.S. Awesome machine :-) Way better spec than I see a lot of VPS companies running (we see a ton of people using 2 x 1 TB s/w RAID 1, or even 0!!, for their VPS hosts).

  • I have nothing to complain about with you guys. I got this sorted with the assistance and guidance of @marcm; he's an awesome person.

  • A straight dd test will not invoke CacheCade; it's algorithm-driven, so you would have to run many dd passes across the same file for it to finally get added to the cache, just like any other caching technology. The problem is that it will never cache /dev/random or other such devices (see the sketch below).

    Same goes for ioping.
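
    A minimal sketch of the kind of repeated-access test that point implies, assuming fio is installed; the file name, size, and runtime are arbitrary placeholders, and you would run it several times back to back to give the controller a chance to promote the hot blocks into the SSD cache:

    # Random 4k reads over the same test file on the cached array; repeat the run
    # and watch whether IOPS climb as blocks are promoted into the cache tier.
    fio --name=cache-warm --filename=/root/fio-test.bin --size=8G \
        --rw=randread --bs=4k --direct=1 --ioengine=libaio --iodepth=32 \
        --runtime=300 --time_based --group_reporting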

  • SpeedyKVMSpeedyKVM Banned, Member

    @ultimatehostings said:
    I have nothing to complain about with you guys. I got this sorted with the assistance and guidance of marcm; he's an awesome person.

    I didn't think you were complaining at all! I'm just on watch here and it's quiet right now on Thanksgiving night, so I thought I would chime in and try to help. :-) If you need help in the future, just open a ticket; we're being more proactive in helping on unmanaged systems these days (during business hours when not busy).

  • Thanks, your input is much appreciated.

  • I would like to thank @marcm for helping me set up SSD caching; he's a great guy and I do appreciate what he's done for me.

  • Slightly improved results:

    [root@server ~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 2.12772 s, 505 MB/s
    
    [root@server ~]# ioping -c 10 /dev/sda
    4096 bytes from /dev/sda (device 7451.0 Gb): request=1 time=19.3 ms
    4096 bytes from /dev/sda (device 7451.0 Gb): request=2 time=22.7 ms
    4096 bytes from /dev/sda (device 7451.0 Gb): request=3 time=8.3 ms
    4096 bytes from /dev/sda (device 7451.0 Gb): request=4 time=12.8 ms
    4096 bytes from /dev/sda (device 7451.0 Gb): request=5 time=15.7 ms
    4096 bytes from /dev/sda (device 7451.0 Gb): request=6 time=22.2 ms
    4096 bytes from /dev/sda (device 7451.0 Gb): request=7 time=12.9 ms
    4096 bytes from /dev/sda (device 7451.0 Gb): request=8 time=19.6 ms
    4096 bytes from /dev/sda (device 7451.0 Gb): request=9 time=9.7 ms
    4096 bytes from /dev/sda (device 7451.0 Gb): request=10 time=25.4 ms
    
    --- /dev/sda (device 7451.0 Gb) ioping statistics ---
    10 requests completed in 9170.4 ms, 59 iops, 0.2 mb/s
    min/avg/max/mdev = 8.3/16.9/25.4/5.6 ms
    
  • drserver Member, Host Rep

    This one will run great.

  • Thanks, it will be a KVM node.

  • IOPS is still too low. :D

  • Shoaib_A Member
    edited November 2013

    @Incero said:
    P.S. Awesome machine :-) Way better spec than I see a lot of VPS companies running (we see a ton of people using 2 x 1 TB s/w RAID 1, or even 0!!, for their VPS hosts).

    I think VPS companies should choose hardware RAID 1 or software RAID 10 for some redundancy at least. Software RAID 0 is the worst practice of them all, and you hardly have any chance of recovering data if a disk fails.
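
    For reference, a minimal sketch of the software RAID 10 option with mdadm; the four device names are placeholders for spare disks:

    # Build a 4-disk software RAID 10 array (placeholder devices), then watch the initial sync.
    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    cat /proc/mdstat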

  • SpeedyKVM Banned, Member
    edited December 2013

    @Sledger said:

    No doubt RAID 0 is a bad idea! We sell servers, not business plans. :-) Cutting corners is what happens when people rush to compete on price only (à la a ton of the VPS industry). The good hosts will survive.
