New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Comments
400MB/s to 1GB/s, I think. I got 1-1.3GB/s on my OAH 512MB OpenVZ VPS.
With the above command I get from 400MB/s to 1GB/s on SATAIII RAID1, so either those SATA disks are good, or the SSDs on your servers are being oversold. (Yes, this is an actively used node.)
Intel 520 ssd 180gb in Dell 1121 http://namhuy.net/1541/intel-520-ssd-180gb-dell-1121-ubuntu.html
Hmm... what's the point of using dd to test sequential IO?
You're not getting that to the disk. Cache maybe, but certainly not disk.
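To see how much the cache flatters the number, compare a plain dd write with a flushed one. A minimal sketch (tempfile is a throwaway name; the speeds you see will differ):

```shell
# Plain write: dd stops timing once the data is in the page cache,
# so the reported speed can be several GB/s regardless of the disk.
dd if=/dev/zero of=tempfile bs=1M count=200

# conv=fdatasync flushes the data to the device before the timing
# stops, which gets closer to (but still not exactly) disk speed.
dd if=/dev/zero of=tempfile bs=1M count=200 conv=fdatasync

# Clean up the scratch file.
rm -f tempfile
```

The gap between the two numbers is roughly how much the page cache was helping.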
What test do you suggest?
There isn't one. DD tests for performance are almost useless. As you say, DD tests sequential IO, which is a useless test (especially for SSDs) because:
a) Real life load is random, not sequential. It's like testing your car on a rolling (fake) road. I don't care if my car can get 10000MPG on a rolling road, if it's only going to get 5MPG on a real-life load. It's real life that matters.
b) There is no real way to get a true sequential result (even with fdatasync), as there is a lot of caching going on. fdatasync forces the data to be flushed before dd reports its time, but there can be many layers underneath (RAID controller caches, the drive's own cache) affecting your result.
c) SSDs really, really shine in their random (IOPS) performance. A standard HDD will give you only about 75-100 IOPS. An SSD can give you 10,000 IOPS or more! An SSD's IOPS rate doesn't really drop between random and sequential access, as it's all electrical. An HDD's IOPS rate does vary, as seek times are much longer for random IO (the head has to physically move!).
and lastly, one which is specific to hosting
d) Every provider sets up their infrastructure differently, so it's hard to compare anything with DD. For example, we like to split our customers over small RAID10 arrays, meaning if something goes wrong, the smallest number of customers is affected. Some providers have one huge, massive array with tens of disks that will give excellent DD results. But if that one array fails, goodbye everyone!
The only thing a really, really low DD result will tell you is that a host's setup is very overloaded and about to fall over in a heap. But you don't need DD to tell you that, as your VPS will be running like crap anyway!
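To put the HDD figure from (c) in perspective, here is a quick back-of-the-envelope calculation (assuming a typical 4KB request size, which is an illustrative choice):

```shell
# 100 random IOPS at 4KB each is only about 400KB/s of throughput -
# orders of magnitude below what the same disk manages sequentially.
echo "$(( 100 * 4096 / 1024 )) KB/s"    # prints "400 KB/s"
```

That is why a disk that posts a lovely sequential DD number can still crawl under a real random workload.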
I hope this helps you @ehab
Thanks
Jonny
Might be some glitch or whatever, but:
With the OP's dd test command:
xxx@play:~$ dd if=/dev/zero of=tempfile bs=1M count=200 conv=fdatasync,notrunc
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 0.213542 s, 982 MB/s
With normal DD usually done here on LE*:
xxx@play:~$ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 5.42464 s, 198 MB/s
The above is still very good for RAID1, I believe.
The first one is 210MB copied, the second one is 1.1GB copied.
Missed the difference in line. I assumed it was the typical LET disk thrasher.
198 MB/s is good for RAID1.
@Jonny , very informative, Thank you for your time and help.
I wish to thank the others who have commented so far.
Can't RAID fail as well?
Everything will fail, sooner or later.
That's why offsite backup is important.
Depends on what exactly you wish to measure.
As was said above, random I/O is the strong side of SSDs.
So, talking of tests, use iozone, fio, or ioping (force caching off when testing). Even better, test real-life cases (i.e. run a real-life large DB with complex concurrent queries).
Also, see the Phoronix Test Suite; there are many random I/O tests there.
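As a concrete starting point, a minimal fio random-read run might look like this (a sketch, assuming the fio package is installed; the file name, size, and runtime are arbitrary choices):

```shell
# Random 4KB reads with direct IO, so the page cache can't inflate
# the result; fio reports IOPS and latency percentiles when done.
fio --name=randread --filename=tempfile --size=256M \
    --rw=randread --bs=4k --direct=1 \
    --runtime=30 --time_based --group_reporting

# Remove the scratch file fio created.
rm -f tempfile
```

Swap --rw=randread for --rw=randrw to mix reads and writes, which is closer to what a busy VPS actually does.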
@Master_Bo , understood, copied your text and will practice at one stage. Thanks.
Missed you lately.
So did I. At times reality's requests are hard to ignore.
Glad to see you, as well.