Ever wonder how an Amazon EC2 performs?
Caveman122
Member
m3.xlarge 4 vCPU 13 ECU 15 GiB (2 x 40 SSD)?
Comments
That I/O has to hurt a little.
I did a few benches, the results are consistent(ly low).
Was your server busy doing stuff while you took that benchmark? Anything that could've heavily influenced the outcome?
For me, both I/O and network speed are really good. I can get 20MB/s download speed from my online.net server on the free tier and 40MB/s on the c3.large plan (Oregon).
Get some provisioned IOPS and test again :P
It was idle, and this is not my server either. Disk I/O could very well be limited for whatever reason.
Why? If you're transferring over 65MB/s from the VPS, you're probably not using the disk anyways. The file is going to reside in memory.
IOPS is what really matters.
The IOPS will be interesting; you have a bit of a point there, but one generally goes with the other [slow throughput and low IOPS].
@OP:
Please run the following two non-destructive commands and post:
Basic IOPing:
IO Seek test:
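The exact commands were lost from the post above; these are the typical non-destructive ioping invocations for those two tests (flags assumed from standard ioping usage, including the -RD the thread mentions later):

```shell
# Basic IOPing: measure request latency against the current directory
ioping -c 10 .

# IO Seek test: seek-rate test using direct I/O (-R rate test, -D direct)
ioping -RD .
```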
Not at all. Many hosts, including us, do not cache sequential operations and reserve cache for commonly used files to boost performance. Sequential writing is a very bad way to test how fast a disk is since in real life scenarios, it is the least common. The only operation that uses large sequential writes is when you're copying a file or uploading to the server and how often do you do that at more than 65MB/s?
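For reference, this is the kind of sequential-write test being criticized, sketched with a small throwaway file (sizes are arbitrary; conv=fdatasync forces a flush so the page cache doesn't inflate the number):

```shell
# Sequential-write sketch: stream zeros into a file, sync, then clean up.
dd if=/dev/zero of=ddtest bs=64k count=1k conv=fdatasync
rm -f ddtest
```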
It won't let me use direct I/O
Please compare the following results, albeit it's just one sample.
My personal Continuous Integration [ build ] server:
Single i5-3470, 250mbps down/15mbps up cable, 1x HDD
One of our minimum-plan [256MB, 0.25 CPU] server instances:
Single E5-1650 v2, 2x 1gbps symmetric, 4x SSD [RAID10] config:
Ahh, without the DIRECT option it's going to be hard to compare, but those are some nice benchmarks either way. I've posted two comparison results above [ from 4x SSD and from 1x HDD. ] The results are skewed however in that the build server is currently idling [ first benchmark ], and the node is full [ second benchmark. ]
I don't get your point. One HDD shouldn't ever get 290MB/s, so something is wrong there or it is being cached. Anyway, it still shows a major difference between sequential operations and IOPS: the HDD was ~3x slower than the SSD array for sequential and ~500x slower for random I/O, confirming what I was saying. Sequential write tests are pretty useless.
The first test was run on CentOS 6.5 x86_64 under VirtualBox [whatever hypervisor / virtualization engine it uses] on a Windows 8 x86_64 host; the single HDD is a Seagate 250GB SATA desktop drive. The second test was run on CentOS 6.5 x86_64 under KVM virtualization on a CentOS 6.5 x86_64 host.
Either way, you're right; my results [while they did show that low went with low and high with high] didn't really "mean" anything, and it could probably go either way. [Although I've yet to see very low sequential paired with high IOPS, or vice versa.]
These are our advanced SSD cached systems:
We disabled caching on sequential operations. So it only uses SATA for that, then it uses an SSD array for random I/O. It looks like Amazon does something similar above.
My VolumeDrive has better I/O :O
I/O or sequential write? Lol. Here we go again.
I/O lol
So it has more than 150K IOPS? Or 260K, if you're referring to Amazon.
My VolumeDrive VPS
You didn't read our conversation then. Whoever named that an I/O test was wrong. That only tests one form of O, not I at all (sequential write), and sequential writes are the least common in normal hosting situations.
Random reads are the most common operation, the exact opposite of what dd tests for.
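To make the contrast concrete, a random-read pattern can be sketched like this (a rough illustration, not a rigorous benchmark; the stride-based offsets stand in for truly random ones):

```shell
# Build a 64 MiB file, then read single 4 KiB blocks at scattered offsets --
# the opposite access pattern to dd's one long sequential write.
dd if=/dev/zero of=seekfile bs=1M count=64 2>/dev/null
i=1
while [ $i -le 100 ]; do
  off=$(( (i * 7919) % 16384 ))   # pseudo-random 4 KiB block index
  dd if=seekfile of=/dev/null bs=4k count=1 skip=$off 2>/dev/null
  i=$((i + 1))
done
rm -f seekfile
```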
The 65MB/s I/O is enough for everything; users just like to see highly tuned results up in the GB/s range.
I have to agree with you on that..
One of my VPS's. This is on a FC array. I am aware that it is not useful as a benchmark!
root@www:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; unlink test
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 2.351 s, 457 MB/s
root@www:~#
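The reported figure checks out, keeping in mind that dd uses decimal MB (10^6 bytes) in its summary line:

```shell
# 1073741824 bytes in 2.351 s, converted to dd's decimal MB/s
awk 'BEGIN { printf "%.0f\n", 1073741824 / 2.351 / 1000000 }'
```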
INIZ VPS (HDD)
Is this good or bad? I think 7 MB/s isn't so good.
1800 IOPS is good for random I/O; SATA drives put out around 150-300. You may want to test with -RD though. It is doubtful that any one VPS needs more than that, but sometimes it is good to have SSD speeds, especially when you have a lot of virtual servers sharing one array. I'd say if your speed is consistent and you don't have any excess load or disk wait, then you have nothing to worry about. Benchmarking high numbers means little in most cases, as long as you have enough IOPS to support your demand.
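This also explains why a low MB/s figure can coexist with healthy random I/O: assuming the benchmark used 4 KiB requests (the usual ioping default), 1800 IOPS works out to only about 7 MiB/s:

```shell
# 1800 random IOPS x 4096 bytes per request, expressed in whole MiB/s
echo $(( 1800 * 4096 / 1048576 ))
```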
I can transfer 3TB per second with a single SATA drive.
The trick is to throw the drive a short distance so it doesn't take long to hit the target.
I mean since we're benchmarking useless statistics. I win.
Is that for the block storage or for the standard storage you get with each instance?
It seemed to be SAN. It was xvda/xvdb on iSCSI.
Probably worthwhile to peruse the performance primer Datadog assembled:
http://www.datadoghq.com/wp-content/uploads/2013/07/top_5_aws_ec2_performance_problems_ebook.pdf