I used to have a better impression of ioping, but it seems to vary so wildly, even on our own nodes, that I've changed my opinion.
No one has posted ours, so I picked our two nodes with the most load.
LA
ioping /root
4096 bytes from /root (ext3 /dev/root): request=1 time=0.1 ms
4096 bytes from /root (ext3 /dev/root): request=2 time=0.2 ms
4096 bytes from /root (ext3 /dev/root): request=3 time=0.1 ms
4096 bytes from /root (ext3 /dev/root): request=4 time=0.2 ms
4096 bytes from /root (ext3 /dev/root): request=5 time=0.2 ms
--- /root (ext3 /dev/root) ioping statistics ---
5 requests completed in 4637.8 ms, 6748 iops, 26.4 mb/s
min/avg/max/mdev = 0.1/0.1/0.2/0.0 ms
UK
ioping /root
4096 bytes from /root (ext4 /dev/sda1): request=1 time=0.1 ms
4096 bytes from /root (ext4 /dev/sda1): request=2 time=0.1 ms
4096 bytes from /root (ext4 /dev/sda1): request=3 time=0.1 ms
4096 bytes from /root (ext4 /dev/sda1): request=4 time=0.1 ms
^C
--- /root (ext4 /dev/sda1) ioping statistics ---
4 requests completed in 3391.8 ms, 9756 iops, 38.1 mb/s
min/avg/max/mdev = 0.1/0.1/0.1/0.0 ms
Our most loaded Germany node
ioping /root
4096 bytes from /root (ext4 /dev/md2): request=1 time=0.3 ms
4096 bytes from /root (ext4 /dev/md2): request=2 time=0.3 ms
4096 bytes from /root (ext4 /dev/md2): request=3 time=0.2 ms
4096 bytes from /root (ext4 /dev/md2): request=4 time=0.3 ms
^C
--- /root ioping statistics ---
4 requests completed in 3254.6 ms, 3880 iops, 15.2 mb/s
min/avg/max/mdev = 0.2/0.3/0.3/0.0 ms
Indeed:
Not sure what server @wlanboy is on, but I'm guessing it's the same as the other two. Testing it myself at various times, I get anywhere from 160 IOPS to 7,000. FWIW, the server is nearly at capacity, and most of its residents are happy; we haven't had a complaint from anyone on that node for the entire time it's been operational (about 5 months now).
This result is on a CacheCade server that's not doing much at the moment: 10 requests completed in 9002.2 ms, 8177 iops, 31.9 mb/s
This result is on our last remaining RAID 5 server: 10 requests completed in 9003.8 ms, 5198 iops, 20.3 mb/s
This result is on a server that we've marked as 'full' and no longer provision new accounts on: 10 requests completed in 9001.6 ms, 12469 iops, 48.7 mb/s
Not sure what to think of this now....
It feels nearly useless... Do you remember the good results on my KnownHost VPS?
Here is a series of results, taken with a 10-second pause in between:
Bullshit. Just like the 'dd' tests. In the future, I will just go by my Munin graphs and by how the VPS "feels".
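Given the wild swings being reported, one-off numbers tell you little. A minimal sketch of the "series with pauses" approach, assuming a POSIX shell and ioping in PATH (the run count and pause length are arbitrary choices, not anyone's actual procedure):

```shell
# Pull the IOPS figure out of an ioping summary line such as:
#   10 requests completed in 9002.2 ms, 8177 iops, 31.9 mb/s
extract_iops() {
    awk -F', ' '/ iops/ { sub(/ iops/, "", $2); print $2 }'
}

# Repeat the test a few times with a pause in between, then summarize
# the spread instead of trusting a single run.
if command -v ioping >/dev/null 2>&1; then
    samples=""
    for run in 1 2 3 4 5; do
        samples="$samples $(ioping -c 10 . | extract_iops)"
        sleep 10
    done
    echo "iops samples:$samples"
    echo "$samples" | tr ' ' '\n' | sort -n | awk 'NF { a[n++] = $1; s += $1 }
        END { printf "min=%s max=%s avg=%.0f\n", a[0], a[n-1], s / n }'
fi
```

The `-c 10` count and the `sleep 10` mirror the ten-request runs and ten-second pauses mentioned in the thread; the min/max gap is the interesting number here, not the average.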
Our own VPS, Starter Plan:
[root@bench ioping-0.6]# ./ioping -c 10 .
4096 bytes from . (simfs /vz/private/1293): request=1 time=0.2 ms
4096 bytes from . (simfs /vz/private/1293): request=2 time=0.4 ms
4096 bytes from . (simfs /vz/private/1293): request=3 time=7.2 ms
4096 bytes from . (simfs /vz/private/1293): request=4 time=0.3 ms
4096 bytes from . (simfs /vz/private/1293): request=5 time=0.4 ms
4096 bytes from . (simfs /vz/private/1293): request=6 time=0.4 ms
4096 bytes from . (simfs /vz/private/1293): request=7 time=0.3 ms
4096 bytes from . (simfs /vz/private/1293): request=8 time=0.3 ms
4096 bytes from . (simfs /vz/private/1293): request=9 time=0.3 ms
4096 bytes from . (simfs /vz/private/1293): request=10 time=0.3 ms
--- . (simfs /vz/private/1293) ioping statistics ---
10 requests completed in 9011.2 ms, 983 iops, 3.8 mb/s
min/avg/max/mdev = 0.2/1.0/7.2/2.1 ms
Run three times; I picked the best one. Lower than I expected, but it's on a node with around 60 containers.
dd and ioping readings can vary widely depending on when you take them, and they're just one factor to consider when evaluating the performance of a VPS.
ioping and dd readings for CloudVPS (and other cloud hosts like Rackspace, Dediserve, etc.) tend to be relatively low, with ioping averaging 600-1200 IOPS and dd around 80-120 MB/s. But when you take other factors into consideration (server reliability, uptime, network reliability/performance, consistent performance throughout the day without wild swings), they're a much better choice for hosting sites you rely on to put food in your mouth than some of the hosts who optimize their ioping numbers to above 10,000 but don't have the infrastructure to offer a high-availability solution.
On the other hand, there are times when poor ioping and dd results make complete sense and paint a perfect picture of a host who oversells their nodes so much that the entire node runs out of disk space and crashes, and whose super-bargain VPS is only suitable as a backup server (points at a LEB provider in NL who fits this description).
Here's the Munin graphs for the aforementioned server:
ioping shows odd results with SSDs. For example, I'm just setting up a new node; I've only provisioned the skeleton, and this is the output of ioping:
As you can see, it doesn't look so good. But the server is flying, and the install took no time at all :-)
This is a read test using fio:
A write test:
And a not-so-useful dd:
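The fio outputs above didn't survive the copy. For reference, a random read/write test of this kind can be described in a fio job file along these lines; the block size, file size, and job names here are my assumptions, not the poster's actual settings:

```ini
; illustrative-randrw.fio -- run with: fio illustrative-randrw.fio
[global]
; 4 KiB blocks, matching ioping's request size
bs=4k
; small test file; real tests often use larger sizes
size=256m
; bypass the page cache so the disk is actually measured
direct=1
runtime=30
group_reporting

[randread]
rw=randread

[randwrite]
; wait for the read job to finish before starting the write job
stonewall
rw=randwrite
```

`direct=1` is the important part: without it, both fio and dd can end up benchmarking the host's RAM cache instead of the disk.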
Doh.
Love how this thread comes along right after I paid for a better RAID/BBU card in our NJ node because of dodgy ioping results; looking at everyone's results, it was fine as it was.
10 requests completed in 9006.2 ms, 2384 iops, 9.3 mb/s
min/avg/max/mdev = 0.2/0.4/0.9/0.2 ms
Well, at least nobody got below the 2.2k mark with us, but still, from 1x to 4x is a very wide gap.
Maybe there's something off with ioping? Unlikely, but results like Amitz shows can't be attributed to load alone, let alone to storage quality, as that doesn't vary three times within a minute.
I tested one of our systems 40 times back to back and got everything from 900+ to 3700+ on the same system. Not sure what this really means, or if it means anything at all.
This is consistent with what people get from us too: from 1x to 4x, in our case roughly 2.2k to 10k+.
Meh, I overrated ioping. At least the concept isn't flawed from the start, unlike dd as a test.
So, on the IOPS thing again: as long as it's over 800, are you really noticing anything different?
@Damian
I am on node citrine.
min/avg/max/mdev = 0.1/0.1/0.1/0.0 ms
@24khost
If I compare ipxcore's 10-11k IOPS to vpscheap's 3-6k IOPS, I see a factor of 2x,
and I can say that something like WordPress, or a Ruby script using the interpreter and MySQL, is at least two times faster on ipxcore.
First page loading on ipxcore: 600ms.
First page loading on vpscheap: 1300ms.
Conclusion:
Even 5k iops can be part of a slow system if something like the CPU is not sufficient.
PS:
I am running a WordPress blog on my ipxcore VM. An identical backup is running on vpscheap (lighttpd + PHP + MySQL). I use the vpscheap VM as a "development" environment for new themes, plugins, updates, etc.
But I do not want to install and configure the whole setup just to compare VMs...
@wlanboy how are you measuring that?
It may load 700 ms faster, but is anybody going to notice that? Not likely.
@24khost
Firefox -> Web-Developer Tools -> Web-Console [Ctrl + Shift + K]
It gives a detailed list of all network requests (just deactivate CSS and JS).
e.g. lowendtalk:
GET http://www.lowendtalk.com/themes/lowendtalk/design/style.css?v=1.02 [HTTP/1.1 304 Not Modified 141ms]
GET http://www.lowendtalk.com/plugins/Tagging/design/tag.css?v=1.3.2 [HTTP/1.1 304 Not Modified 297ms]
GET http://www.lowendtalk.com/js/library/jquery.js?v=2.1a4 [HTTP/1.1 304 Not Modified 281ms]
GET http://www.lowendtalk.com/js/library/jquery.livequery.js?v=2.1a4 [HTTP/1.1 304 Not Modified 297ms]
GET http://www.lowendtalk.com/js/library/jquery.form.js?v=2.1a4 [HTTP/1.1 304 Not Modified 266ms]
GET http://www.lowendtalk.com/js/library/jquery.popup.js?v=2.1a4 [HTTP/1.1 304 Not Modified 266ms]
GET http://www.lowendtalk.com/js/library/jquery.gardenhandleajaxform.js?v=2.1a4 [HTTP/1.1 304 Not Modified 312ms]
GET http://www.lowendtalk.com/js/global.js?v=2.1a4 [HTTP/1.1 304 Not Modified 406ms]
GET http://www.lowendtalk.com/applications/vanilla/js/bookmark.js?v=2.1a4 [HTTP/1.1 304 Not Modified 406ms]
GET http://www.lowendtalk.com/applications/vanilla/js/discussions.js?v=2.1a4 [HTTP/1.1 304 Not Modified 422ms]
GET http://www.lowendtalk.com/applications/vanilla/js/options.js?v=2.1a4 [HTTP/1.1 304 Not Modified 422ms]
GET http://www.lowendtalk.com/js/library/jquery.gardenmorepager.js?v=2.1a4 [HTTP/1.1 304 Not Modified 437ms]
And yes, if the plain HTML output of a site needs more than one second (1300 ms), the website feels slow (CSS, JS, and images are loaded afterwards).
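Roughly the same first-byte measurement can also be taken from the shell with curl's `-w` timing variables, assuming curl is installed (the URL here is just an example):

```shell
# Time only the plain HTML document, skipping CSS/JS/images, as in the
# Web-Console measurement above. time_starttransfer is the time to first byte.
fmt='dns=%{time_namelookup}s connect=%{time_connect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n'
if command -v curl >/dev/null 2>&1; then
    curl -o /dev/null -s --max-time 10 -w "$fmt" "http://www.lowendtalk.com/" || true
fi
```

Unlike a browser, this ignores caching and rendering, so it isolates the server-side and network portion of the "first page load" number.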
root@miami:~# ./ioping -c 10 .
-bash: ./ioping: No such file or directory
thanks @jack
budgetvm dallas:
10 requests completed in 9005.2 ms, 2860 iops, 11.2 mb/s
budgetvm miami:
10 requests completed in 9002.3 ms, 8532 iops, 33.3 mb/s
Let me test the other locations.
A server with 2 crappy Seagate Barracuda drives in software RAID 1. Now it's too late; I can't go through the trouble of having the drives changed and copying everything back. But then, I am not seeing much of a performance issue. Have about 200 shared accounts on it.
min/avg/max/mdev = 0.1/0.1/0.1/0.0 ms
This is from @concerto49's stuff.
Not too happy with that. Going to upgrade it later. This made me spend the night reading specs, benchmarks, and hardware data sheets.
700ms is forever, especially when talking about drives.
If the page load is 700 ms more, that's 3x longer than recommended, and that stacks on top of whatever the actual load really is. Might be 1-5 seconds. It should be 250 ms total, all inclusive, shipped out the door.
Yeah, that is..
2x 500 GB random Toshiba 7200s in RAID 1:
10 requests completed in 9030.4 ms, 342 iops, 1.3 mb/s
min/avg/max/mdev = 0.1/2.9/15.0/5.6 ms
4x 1 TB Ultrastars in SW RAID 10:
10 requests completed in 9036.6 ms, 281 iops, 1.1 mb/s
min/avg/max/mdev = 0.0/3.6/15.6/5.9 ms
Hostigation LAX Location:
10 requests completed in 9006.1 ms, 3542 iops, 13.8 mb/s
min/avg/max/mdev = 0.2/0.3/0.5/0.1 ms
OVH KS 2G:
10 requests completed in 9009.8 ms, 1221 iops, 4.8 mb/s
min/avg/max/mdev = 0.2/0.8/6.4/1.9 ms
I know the OVH is an Atom-powered server and not a VPS, but I figured I would post it anyway, since the pricing is similar to a VPS.
Newly activated TorqHost:
Trial VPS with Indonesia location (non-LEB):
What's wrong with your slow I/O?
I love FitVPS.
And I hate QHoster.
@pubcrawler, but that is not the issue. His page load times may differ based on where the servers are located, so his comparison is flawed. Sure, drive latency in milliseconds matters, but he isn't going to see much actual difference in page load, since the requests may take different routes and the geographic locations may differ.
He is comparing apples to oranges.