Dedicated server with NVMe Caching raid 1/10

Does this configuration make sense? I don't see many SSD/NVMe caching solutions. Caching was prominent back in 2011 or so, when SSDs were expensive yet accessible, and frankly I still consider them expensive in 2018 for large file storage.

So what's the downside to a 120GB NVMe caching a server in RAID 1 with 2x 4TB drives, or RAID 10 with 4x drives at 8TB?

Enterprise-grade spindle drives have never been cheaper, so it feels like a nice balance, especially with RAID 10, where you get high sustained transfer speeds and the cache can handle the quick small database accesses.

So why don't we see this more regularly? Or is this not considered LET enough ;)

P.S. Of course you could also get 32GB of extra RAM just for data caching, but with current RAM pricing that seems cost-prohibitive vs. NVMe caching.

What are the real-world gains here? I know @Francisco had considered it for the block storage solution we're all waiting for. Perhaps he can give us a bit of insight into how it performs in a large-scale deployment?

Comments

  • First-Root Member, Provider

    It's pretty common for Ceph deployments to place the journal on a high-end SSD/NVMe. The journal itself can be compared to a write cache.

  • @FR_Michael said:
    It's pretty common for Ceph deployments to place the journal on a high-end SSD/NVMe. The journal itself can be compared to a write cache.

    Sure, I understand that kind of setup, but I mean more overall intelligent caching, similar to SSHDs and exactly like SSD caching on consumer PC setups.
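    On Linux, bcache is one stock kernel feature that gives exactly that SSHD-style transparent caching. A rough sketch, assuming hypothetical device names (/dev/md0 as the RAID array, /dev/nvme0n1 as the cache drive) — both devices are reformatted by these commands:

    ```shell
    # Format the spinner array as a bcache backing device,
    # and the NVMe as a bcache cache device (DESTRUCTIVE):
    make-bcache -B /dev/md0
    make-bcache -C /dev/nvme0n1

    # Attach the cache set to the backing device; the cache-set
    # UUID comes from `bcache-super-show /dev/nvme0n1`:
    echo <cset-uuid> > /sys/block/bcache0/bcache/attach

    # writeback mode gives the SSHD-like feel; writethrough is the
    # safer default if the cache device isn't mirrored:
    echo writeback > /sys/block/bcache0/bcache/cache_mode

    # The filesystem then goes on the combined device:
    mkfs.ext4 /dev/bcache0
    ```

    The `<cset-uuid>` placeholder is deliberate; treat the whole thing as a sketch of the workflow rather than a copy-paste recipe.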

  • jackb Member, Provider
    edited March 2018

    @sureiam said:

    So what's the downside to a 120GB NVMe caching a server in RAID 1 with 2x 4TB drives, or RAID 10 with 4x drives at 8TB?

    The downside is your SSD is ~0.7% of your available space (assuming 4x 8TB spinners). Depending on how much of your data is hot, you might see frequent evictions / cache thrashing.

    3-10% coverage of available space is a nicer number, but more expensive. Also worth considering how you swap the NVMe: is it U.2 in a hotswap bay? If so, no problem. Otherwise you probably need to unplug the server to swap it.
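    The ratio math above can be sanity-checked quickly (figures assume 4x 8TB raw halved by RAID 10 mirroring, i.e. 16TB usable):

    ```shell
    # Cache coverage of a 120GB NVMe over 16TB usable (in GB):
    awk 'BEGIN { printf "coverage: %.2f%%\n", 100 * 120 / 16000 }'
    # prints: coverage: 0.75%

    # Flash needed to hit the 3-10% coverage target instead:
    awk 'BEGIN { printf "need: %d-%d GB\n", 0.03 * 16000, 0.10 * 16000 }'
    # prints: need: 480-1600 GB
    ```

    In other words, hitting even the low end of that target takes four of those 120GB drives.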


  • FHR Member, Provider

    32GB of RAM for caching is not viable. Additionally, it would have to be for read operations only; caching write ops in RAM would be a terrible idea. What if the server crashed for any reason? Major data loss.


  • Francisco Top Provider

    sureiam said: What are the real-world gains here? I know @Francisco had considered it for the block storage solution we're all waiting for. Perhaps he can give us a bit of insight into how it performs in a large-scale deployment?

    Honestly? We don't know. We're putting a fair bit of cache into each node and have either 128GB or 256GB of RAM just for page caching.

    We don't know if this will be enough, but since we're going to be rate-limiting each user's available MB/sec and IOPS, as well as running mirrored volumes, we're hopeful that'll be enough to keep it running smoothly.

    If it doesn't, we'll see about adding additional cache.

    Francisco

  • IonSwitch_Stan Member, Host Rep
    edited March 2018

    So why don't we see this more regularly?

    Because most filesystems don't easily support it. ZFS supports ZIL/L2ARC on NVMe, but then you have the complexity of ZFS and the memory given up to ZFS -- and for most servers RAM is at a premium over disk space. Ceph etc. are overly complicated for smaller setups.
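    For reference, attaching those ZFS devices is a one-liner each. A sketch assuming a hypothetical pool named `tank` and two partitions carved out of the NVMe:

    ```shell
    # Small partition as a SLOG (separate ZIL) for sync writes --
    # mirror it if losing in-flight sync writes matters:
    zpool add tank log /dev/nvme0n1p1

    # Remainder as L2ARC read cache; its contents are disposable,
    # so a single unmirrored device is fine:
    zpool add tank cache /dev/nvme0n1p2

    # Note: L2ARC headers themselves consume ARC (RAM), which is
    # part of the memory cost mentioned above.
    zpool status tank
    ```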

    Additionally, people won't pay a premium for it the way they do with SSD storage. Storage servers are already low ROI.

    I would love to run NVMe to get higher IOPS from our SSD hardware arrays via IOPS offloading, but I don't believe lvm-cache will add much benefit.
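    For anyone wanting to benchmark lvm-cache themselves, the setup is roughly this — volume group and LV names are hypothetical, and this assumes both the array and the NVMe are already PVs in `vg0`:

    ```shell
    # Data LV on the spinner/SSD array:
    lvcreate -n slow -l 90%FREE vg0 /dev/md0

    # Cache volume on the NVMe:
    lvcreate -n fast -l 100%FREE vg0 /dev/nvme0n1

    # Attach the NVMe LV as a cache for the data LV:
    lvconvert --type cache --cachevol fast vg0/slow

    # Verify; the cached LV keeps its old name and path:
    lvs -a vg0
    ```

    Detaching later with `lvconvert --uncache vg0/slow` flushes dirty blocks back to the slow LV, so it's a fairly low-risk experiment.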
