I/O scheduler change = 500x performance on SSD?
raindog308
Administrator, Veteran
in General
I came across this today:
https://www.elastic.co/guide/en/elasticsearch/guide/current/hardware.html
Scroll down to "Check your I/O scheduler". 500x improvement from moving from cfq to deadline when using SSD? Quite a claim.
The claimants here are Elasticsearch, who don't sell hardware and are interested in performance, so they have no motivation to invent things, though I imagine YMMV.
I'm guessing this isn't exposed or significant if you make the change at the VM level.
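For reference, the check the guide describes is just reading sysfs; a minimal example, assuming the device is sda (the active scheduler shows in brackets):
cat /sys/block/sda/queue/scheduler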
Comments
Claims seem crazy! Though the logic makes sense, surely? I wonder if this affects lifespan and wear on an SSD as well?
CFQ's pretty lame on any kernel that isn't late 3.x. In the late 3.x's they redid it so it doesn't suck so much on SSDs and it should be comparable.
For SSDs you should be looking at NOOP or deadline though. This will break your ability to ionice things, but if you're doing pure SSD that shouldn't be a big concern.
Francisco
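Switching at runtime is a one-liner, assuming again a device named sda; the change only lasts until reboot:
echo noop > /sys/block/sda/queue/scheduler
(Substitute deadline if you prefer it over noop.)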
I don't know about 500x improvement, but it is significant, and pretty common knowledge in most SSD tuning guides I've seen. Works best at the node level, but also in the VM.
I thought everyone knew this already.
I dropped average iowait time by 10% by switching from CFQ to NOOP. It's crazy what a little change like that can do.
elevator=deadline
is what gets me the best bump with CentOS 7 KVM nodes.
Is that a CentOS thing? What's the default on Debian?
Edit: Never mind. It's using cfq by default.
Echoing the scheduler name into /sys/block/<device>/queue/scheduler (or booting with elevator=deadline) will change the scheduler.
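To make it persistent on CentOS 7, one common approach (assuming grub2 with its config at the usual BIOS path) is to append elevator=deadline to GRUB_CMDLINE_LINUX in /etc/default/grub and then rebuild the config:
grub2-mkconfig -o /boot/grub2/grub.cfg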
gave me the best boost in a Hyper-V HA environment.
Also, for anything with more than 8 CPUs or 30 GB of RAM in Hyper-V (and presumably VMware), make sure to turn NUMA off at the kernel level to utilize all CPUs.
Per: Hyper-V Best Practices / geekbench scores
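Disabling NUMA at the kernel level is likewise a boot parameter; a sketch, assuming the same /etc/default/grub workflow as above:
GRUB_CMDLINE_LINUX="... numa=off"
(Rebuild the grub config and reboot for it to take effect.)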
I didn't, but my sysadmin fu is admittedly rusty.
noop is the way to go for SSDs!
Is this for servers only, or would a personal laptop with an SSD benefit at all from this? Bearing in mind it is FDE using LUKS - should that make an impact?
For a desktop/laptop, switch to BFQ if you are comfortable with non-default kernel patches like linux-ck.
While NOOP might do well in synthetic benchmarks, batching together your writes is not a bad thing. So go with
deadline
for default kernels.
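If you'd rather not touch the boot line, a udev rule is another common way to pin deadline on SSDs across reboots; a sketch, with a hypothetical filename like /etc/udev/rules.d/60-ssd-scheduler.rules:
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="deadline"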