Flashcache + Ramdisk?

edan Member
edited February 2015 in General

Hello all,

I'm setting up flashcache with a ramdisk as the cache device to speed up my /var partition. So far it works great: significant IO improvements, especially in IOPS.

I set the ramdisk size to 2GB, and the flashcache cache mode is write-through (the safest).

[[email protected] ~]# dmsetup table
var_cache: 0 5684107264 flashcache conf:
        ssd dev (/dev/ram0), disk dev (/dev/md2) cache mode(WRITE_THROUGH)
        capacity(1952M), associativity(512), data block size(4K)
        disk assoc(256K)
        skip sequential thresh(0K)
        total blocks(499712), cached blocks(462876), cache percent(92)
        nr_queued(0)
Size Hist: 1024:1 4096:8310442
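
For reference, a setup like the one above can be built roughly as follows. This is a sketch, not my exact commands; the size matches the 2GB mentioned, and flashcache_create is the userspace tool that ships with flashcache:

```shell
# Create one 2 GB ram-disk block device (/dev/ram0); rd_size is in KiB
modprobe brd rd_nr=1 rd_size=2097152

# Build a write-through ("thru") flashcache device named var_cache,
# using /dev/ram0 as the cache and /dev/md2 as the backing disk
flashcache_create -p thru var_cache /dev/ram0 /dev/md2

# Mount the resulting device-mapper target as /var
mount /dev/mapper/var_cache /var
```

Since write-through never holds dirty data only in the cache, losing the ramdisk contents on reboot costs nothing but warm-up time.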

Simple benchmark using ioping:


[[email protected] ~]# ioping -c 10 /var
4.0 KiB from /var (ext4 /dev/mapper/var_cache): request=1 time=12 us
4.0 KiB from /var (ext4 /dev/mapper/var_cache): request=2 time=21 us
4.0 KiB from /var (ext4 /dev/mapper/var_cache): request=3 time=21 us
4.0 KiB from /var (ext4 /dev/mapper/var_cache): request=4 time=21 us
4.0 KiB from /var (ext4 /dev/mapper/var_cache): request=5 time=21 us
4.0 KiB from /var (ext4 /dev/mapper/var_cache): request=6 time=21 us
4.0 KiB from /var (ext4 /dev/mapper/var_cache): request=7 time=21 us
4.0 KiB from /var (ext4 /dev/mapper/var_cache): request=8 time=21 us
4.0 KiB from /var (ext4 /dev/mapper/var_cache): request=9 time=20 us
4.0 KiB from /var (ext4 /dev/mapper/var_cache): request=10 time=20 us

--- /var (ext4 /dev/mapper/var_cache) ioping statistics ---
10 requests completed in 9.0 s, 50.3 k iops, 196.3 MiB/s
min/avg/max/mdev = 12 us / 19 us / 21 us / 2 us
[[email protected] ~]# ioping -RD /var

--- /var (ext4 /dev/mapper/var_cache) ioping statistics ---
218.9 k requests completed in 3.0 s, 214.1 k iops, 836.2 MiB/s
min/avg/max/mdev = 3 us / 4 us / 25 us / 0 us
[[email protected] ~]# ioping -RL /var

--- /var (ext4 /dev/mapper/var_cache) ioping statistics ---
18.1 k requests completed in 3.0 s, 6.7 k iops, 1.6 GiB/s
min/avg/max/mdev = 143 us / 150 us / 454 us / 4 us
[[email protected] ~]# ioping -RC /var

--- /var (ext4 /dev/mapper/var_cache) ioping statistics ---
5.0 M requests completed in 3.0 s, 1.8 M iops, 7.1 GiB/s
min/avg/max/mdev = 0 us / 0 us / 53 us / 0 us

Anyone using this kind of solution?

Comments

  • rds100 Member
    edited February 2015

    This does not make your /var partition any faster, it just makes the benchmark look better. The ioping benchmark flushes the cache before attempting to read from the disk; normal applications don't do this, and it doesn't make sense to do this (flush the cache before reading). So don't bother, just use your normal partition and let Linux use the available RAM for caching as it wants. There is no point to f*ck with flashcache like this. Flashcache is for SSDs.

    Thanked by: 0xdragon, ehab
  • Is that a 3 level cache? Ramdisk < --- > flashcache < --- > disk? If so, how are you handling the "caching" from ramdisk to flashcache?

    Combating spammers/trolls/crawlers/fraudsters? Try free Proxy / VPN / Bad IP Detection || You can find my other useful scripts on GitHub or contact me on Twitter

  • @rds100 said: This does not make your /var partition any faster, it just makes the benchmark look better. The ioping benchmark flushes the cache before attempting to read from the disk; normal applications don't do this, and it doesn't make sense to do this (flush the cache before reading). So don't bother, just use your normal partition and let Linux use the available RAM for caching as it wants. There is no point to f*ck with flashcache like this. Flashcache is for SSDs.

    I've seen many people use write-through for their SSD caches (using flashcache), so it seems a ramdisk can beat those configurations.

    @black said: Is that a 3 level cache? Ramdisk < --- > flashcache < --- > disk? If so, how are you handling the "caching" from ramdisk to flashcache?

    From their docs:

    Writethrough - safest, all writes are cached to ssd but also written to disk
    immediately. If your ssd has slower write performance than your disk (likely
    for early generation SSDs purchased in 2008-2010), this may limit your system
    write performance. All disk reads are cached (tunable).

    Writethrough and Writearound caches are not persistent across a device removal
    or a reboot. Only Writeback caches are persistent across device removals
    and reboots. This reinforces 'writeback is fastest', 'writethrough is safest'.

    So basically it's like SSD < --- > flashcache < --- > disk in write-through mode.

  • As I said, using the ramdisk will not make your performance faster; it will make it slower (because some RAM is reserved for the ramdisk and cannot be used by the Linux kernel for anything else). It only makes that one artificial benchmark look better, but the real-world performance will not be better.


  • edan said: So basically its like SSD < --- > flashcache < --- > disk on write-through mode.

    Wait, what? Isn't the SSD your flashcache? So why do you have it on there twice? Where's your ramdisk?


edan Member
    edited February 2015

    @rds100 said: As I said, using the ramdisk will not make your performance faster; it will make it slower (because some RAM is reserved for the ramdisk and cannot be used by the Linux kernel for anything else). It only makes that one artificial benchmark look better, but the real-world performance will not be better.

    Interesting, but then why are some people using an SSD in write-through mode? The only benefit of an SSD would be persistent data across reboots, and that only applies to write-back mode.

    Edit: perhaps the cache size :)

    @black said: Wait, what? Isn't the SSD your flashcache? So why do you have it on there twice? Where's your ramdisk?

    Ah, I thought you meant flashcache the software and not the device; the flashcache device is the ramdisk.

    flashcache (ramdisk) < --- > disk.

rds100 Member
    edited February 2015

    edan said: Interesting, but then why are some people using an SSD in write-through mode? The only benefit of an SSD would be persistent data across reboots, and that only applies to write-back mode.

    This is wrong. If you use write-back mode and there is an unexpected power loss or non-graceful restart, you have data loss. Write-back mode is unsafe for the data. Yes, it might make the benchmarks look better, but at the cost of potential data loss. Write-through and write-around modes are safe, i.e. even if there is a power loss there is no data loss. At least no data loss caused by flashcache.


  • edan said: Ah, I thought you meant flashcache the software and not the device; the flashcache device is the ramdisk.

    So I'm assuming the setup you have is SSD <---> RAM <---> HDD

    It doesn't make much sense. If you're reading a file from HDD for processing, it goes from HDD to RAM, then to SSD. If it's going from HDD to RAM, then the file can be processed, why do an extra step to read it into SSD?

    If you're writing a file, why are you writing to an SSD instead of RAM? RAM is much faster. If the file is so large that RAM can't hold it, then the bottleneck would still be SSD <---> HDD, so why use a ramdisk at all?

    There's a reason why L1 cache is faster than L2, and L2 cache is faster than L3.


Falzo Member
    edited February 2015

    @black: you probably missed something here. He doesn't use any SSD at all; he uses a portion of his RAM as a ramdisk instead of an SSD to do caching. flashcache is the software he uses to manage this: https://github.com/facebook/flashcache/

    @edan: some time ago I played around with various settings in a setup like yours, trying out flashcache, EnhanceIO and RapidDisk. Turned out it's not worth the hassle after all... as @rds100 already mentioned, in the best case it just makes your benchmark look better. In the real world, 2GB of caching this way probably won't improve much over the caching already done by the kernel...


  • Falzo said: @black: you probably missed something here. He doesn't use any SSD at all; he uses a portion of his RAM as a ramdisk instead of an SSD to do caching.

    Oh that makes a lot more sense then. I went "wtf?" pretty hard.


  • Actually, I wonder whether this could be useful for the following scenario:

    I'm looking for a way to maximise read/write performance for a partition for large-ish temporary files. There's a mix of sequential and random read/writes. As they're temporary, I don't really care if they go missing/corrupt due to power failure etc.
    tmpfs is limited by the amount of RAM you have. I've also considered using an ext4 partition with journaling disabled.

    Whilst the OS will cache reads normally, I suspect that it'll generally try to persist writes as soon as it gets them. Sticking a ramdisk cache (using a "dangerous" write mode like writeback) in the middle could perhaps speed up bursty random writes.

    Good/bad idea?
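
    If safety really doesn't matter, the ext4-without-journal idea above can be sketched like this (device name and mount point are assumptions; nobarrier further trades crash safety for write speed):

    ```shell
    # Create ext4 with the journal disabled; fine for disposable data
    mkfs.ext4 -O ^has_journal /dev/sdb1

    # Mount without access-time updates and without write barriers
    mount -o noatime,nobarrier /dev/sdb1 /scratch
    ```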

  • Linux's file system cache already does both write-back and read caching. Of course you can force writes to disk (sync / fsync) and you can force a total cache flush, but normally you don't do this.


xyz Member
    edited February 2015

    @rds100 said: Linux's file system cache already does both write-back and read caching. Of course you can force writes to disk (sync / fsync) and you can force a total cache flush, but normally you don't do this.

    Good point, I forgot about that. Probably better to tune write caching, although the problem is that it probably applies to all partitions and not just one.
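
    The kernel write-back behaviour rds100 describes is controlled by the vm.dirty_* sysctls, so tuning write caching could look something like this (the values are illustrative assumptions, not recommendations, and as noted they apply system-wide, not per partition):

    ```shell
    # Allow up to 40% of RAM to hold dirty (not yet written) pages
    # before writing processes are throttled
    sysctl -w vm.dirty_ratio=40

    # Start background writeback once 10% of RAM is dirty
    sysctl -w vm.dirty_background_ratio=10

    # Let dirty pages age up to 30 seconds before being flushed
    # (the unit is centiseconds)
    sysctl -w vm.dirty_expire_centisecs=3000
    ```

    Putting the same settings in /etc/sysctl.conf would make them survive a reboot.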

perennate Member, Provider

    If you want fast IO, just get lots of RAM and load your root partition into RAM -- http://reboot.pro/topic/14547-linux-load-your-root-partition-to-ram-and-boot-it/

  • @rds100 Seems you misunderstood my comments. I know the drawbacks of each caching mode :)

    @Falzo in my current setup the latency improved dramatically; the /var partition holds the database, and it's pretty helpful. This is the same box running OwnCloud: http://lowendtalk.com/discussion/38491/hetzner-review-i7-3770-16gb-ram-2-x-3tb-26-5-euro-month-with-vat-31-euro-month.
