I/O scheduler change = 500x performance on SSD?

raindog308 Administrator, Veteran

I came across this today:

https://www.elastic.co/guide/en/elasticsearch/guide/current/hardware.html

Scroll down to "Check your I/O scheduler". 500x improvement from moving from cfq to deadline when using SSD? Quite a claim.

The claim here comes from Elastic, who don't sell hardware and are interested in performance, so they have no motivation to invent things, though I imagine YMMV.

I'm guessing this isn't exposed or significant if you make the change at the VM level.
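A quick way to see which scheduler each disk is using (a sketch; the entry shown in [brackets] is the active one, and device names vary by system):

```shell
# List the active I/O scheduler for every block device.
# The scheduler in [brackets] is the one currently in use.
for dev in /sys/block/*/queue/scheduler; do
    printf '%s: %s\n' "$dev" "$(cat "$dev")"
done
```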

Thanked by 1GCat

Comments

  • Claims seem crazy! Though the logic makes sense, surely? I wonder if this affects lifespan and wear on an SSD as well?

    Thanked by 1ollietrex
  • Francisco Top Host, Host Rep, Veteran

    CFQ's pretty lame on any kernel that isn't late 3.x. In the late 3.x releases they redid it so it doesn't suck so much on SSDs, and it should be comparable.

    For SSDs you should be looking at NOOP or deadline though. This will break your ability to ionice things, but if you're doing pure SSD that shouldn't be a big concern.
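    A sketch of the ionice point (the echoed command below is just a placeholder): I/O priorities set with ionice are enforced by CFQ; under noop or deadline the call still succeeds, but the priority class has no effect on scheduling order.

```shell
# Run a command in the "idle" I/O class. With CFQ this deprioritizes
# its disk access; with noop/deadline the class is recorded but ignored.
ionice -c 3 echo "running at idle I/O priority"
```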

    Francisco

    Thanked by 1GCat
  • miTgiB Member

    raindog308 said: I'm guessing this isn't exposed or significant if you make the change at the VM level.

    I don't know about 500x improvement, but it is significant, and pretty common knowledge in most SSD tuning guides I've seen. Works best at the node level, but also in the VM.

    Thanked by 1vimalware
  • texteditor Member

    i thought everyone knew this already

    Thanked by 3vimalware Damian GCat
  • jar Patron Provider, Top Host, Veteran

    I dropped average iowait time by 10% by switching from CFQ to NOOP. It's crazy what a little change like that can do.

  • miTgiB Member

    jarland said: CFQ to NOOP

    elevator=deadline is what I get the best bump with CentOS 7 KVM nodes.
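    A sketch of how that's typically set on CentOS 7 (assumptions: BIOS boot with the stock grub2 paths; EFI systems write grub.cfg elsewhere, so adjust the output path accordingly):

```shell
# Append elevator=deadline to the kernel command line in
# /etc/default/grub (GRUB_CMDLINE_LINUX="... elevator=deadline"),
# then regenerate the GRUB config and reboot:
sudo sed -i 's/^\(GRUB_CMDLINE_LINUX=".*\)"/\1 elevator=deadline"/' /etc/default/grub
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```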

    Thanked by 2jar black
  • black Member
    edited May 2016

    Is that a CentOS thing? The default on Debian is:

    cat /sys/block/sda/queue/scheduler
    noop deadline [cfq] 

    Edit: Nevermind. It's using cfq by default.

     echo deadline > /sys/block/YOURDRIVE/queue/scheduler 

    will change the scheduler.
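    Note that the echo only lasts until reboot. One common way to make it persist (a sketch; the rule filename is arbitrary, and the match assumes the SSD reports itself as non-rotational):

```shell
# Persist the deadline scheduler for all non-rotational (SSD) disks
# via a udev rule, applied automatically at boot and on hotplug:
cat <<'EOF' | sudo tee /etc/udev/rules.d/60-ssd-scheduler.rules
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="deadline"
EOF
```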

  • Kris Member
    elevator=noop

    gave me the best boost in a Hyper-V HA environment.

    Also, with anything more than 8 CPUs or 30GB of RAM in Hyper-V (and presumably VMware), make sure to turn NUMA off at the kernel level to utilize all CPUs.

    Per: Hyper-V Best Practices / geekbench scores
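    For reference, a sketch of the NUMA-off change (assumption: a GRUB2-based Linux guest; numa=off is the standard kernel parameter for this):

```shell
# Disable NUMA handling in the guest kernel by adding numa=off to the
# boot command line, e.g. in /etc/default/grub:
#   GRUB_CMDLINE_LINUX="... numa=off"
# then regenerate grub.cfg (grub2-mkconfig -o /boot/grub2/grub.cfg) and reboot.
```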

  • raindog308 Administrator, Veteran

    @texteditor said:
    i thought everyone knew this already

    I didn't but my sysadmin fu is admittedly rusty.

  • david_W Member

    noop is the way to go for SSDs!

  • Is this for servers only, or would a personal laptop running with an SSD benefit *at all* from this, bearing in mind it is FDE using LUKS? Should that make an impact?

    Thanked by 1yomero
  • For a desktop/laptop, switch to BFQ if you are comfortable with non-default kernel patches like linux-ck.

    While NOOP might do well in synthetic benchmarks, batching together your writes is not a bad thing. So go with deadline for default kernels.
