    RAID 1 Significant Performance Loss Dedicated Server

    I've been running some tests on a Hetzner auction server with 2x 512GB NVMe drives in RAID 1, and I noticed that when I run bench.sh inside the rescue system (no RAID configured) I get massive IO speed, upwards of 2000 MB/s.

    But as soon as I install an OS and RAID 1 goes into effect, that speed drops to ~550-650 MB/s, which is still plenty fast, but clearly well below what these drives are capable of.
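
    To narrow down whether it's the md layer itself, I'm planning to compare reads straight off one NVMe namespace against reads from the array, plus a file-based write test on top of it. Just a sketch; device names like /dev/nvme0n1 and /dev/md0 are what I expect them to be on this box, and only the read tests touch raw devices:

        # Sequential read straight from one NVMe namespace (read-only, safe)
        fio --name=raw-read --filename=/dev/nvme0n1 --rw=read --bs=1M \
            --direct=1 --ioengine=libaio --iodepth=32 --runtime=30 --time_based --readonly

        # Same read against the md array
        fio --name=md-read --filename=/dev/md0 --rw=read --bs=1M \
            --direct=1 --ioengine=libaio --iodepth=32 --runtime=30 --time_based --readonly

        # Sequential write through the filesystem on top of the array
        # (writes to a scratch file, so it won't clobber anything)
        fio --name=md-write --directory=/root --size=4G --rw=write --bs=1M \
            --direct=1 --ioengine=libaio --iodepth=32 --runtime=30 --time_based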

    I also noticed something similar with an NForce dedi running a pair of Samsung 850/860 SSDs: a single drive would do ~580 MB/s, but in RAID 1 they were capped at ~400 MB/s.
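
    One thing I still want to rule out is the write-intent bitmap, since I've read it can hold back sequential writes on md RAID 1. Rough plan below; md0 is a guess for the array name, and dropping the bitmap is only for testing, because without it an unclean shutdown means a full resync:

        # See how the arrays are laid out and whether a write-intent bitmap is active
        cat /proc/mdstat
        mdadm --detail /dev/md0

        # Purely as a test: drop the bitmap, re-run the benchmark, then put it back
        mdadm --grow /dev/md0 --bitmap=none
        mdadm --grow /dev/md0 --bitmap=internal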

    Has anyone noticed something like this before?

    In my case, I run a pretty latency-sensitive database, so I'll take any additional IO gains I can get. My question is: if software RAID 1 puts such a performance limit on SSD/NVMe drives, would I be better off going with a single-drive setup and implementing redundancy a different way?
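
    For what it's worth, since it's database latency I actually care about, I was also going to run something closer to a database workload than the sequential numbers bench.sh reports, roughly like this (paths and sizes are just placeholders):

        # 4k random read/write with an fsync after each write,
        # which is closer to what the database does than a sequential test
        fio --name=db-like --directory=/var/lib/test --size=2G \
            --rw=randrw --rwmixread=70 --bs=4k --ioengine=libaio --iodepth=1 \
            --direct=1 --fsync=1 --runtime=60 --time_based --group_reporting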

    Thanks.
