Can someone explain this to me?
First of all, this isn't an important question. It's just something I noticed, and I would love to learn about this kind of thing.
I have two servers, one at IPXcore @Damian and one at RamNode @Nick_A (I can recommend them both!).
Running this command:
dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync && rm -rf iotest
gives me this on RamNode:
root@server:~# dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync && rm -rf iotest
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 3.99496 s, 269 MB/s
while IPXcore gives me this:
root@mon:~# dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync && rm -rf iotest
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 10.4859 s, 102 MB/s
That much is obvious: RamNode is cached and IPXcore isn't.
Now let's take this command:
dd if=/dev/zero of=sb-io-test bs=64k count=16k oflag=dsync
From my understanding, the blocks will be 64K (instead of 1M) and there will be 16 times as many. Assuming SSDs are much faster at writing, I was expecting RamNode to be faster. But it wasn't:
RamNode:
root@server:~# dd if=/dev/zero of=sb-io-test bs=64k count=16k oflag=dsync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 13.6672 s, 78.6 MB/s
IPXcore:
root@mon:~# dd if=/dev/zero of=sb-io-test bs=64k count=16k oflag=dsync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 10.0707 s, 107 MB/s
Now my question: how, and why?
I'm really sorry if this is a noob question; I'm just wondering why this happens.
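For anyone comparing the two commands: with GNU coreutils dd, `conv=fdatasync` buffers all the writes and issues a single fdatasync() at the very end, while `oflag=dsync` opens the file with O_DSYNC so every individual write must reach stable storage before the next one starts. A rough side-by-side sketch (small sizes so it finishes quickly; the temp file names are my own, not from the thread):

```shell
# conv=fdatasync: writes land in the page cache, then ONE fdatasync()
# at the end -- the reported speed mostly reflects sequential bandwidth.
dd if=/dev/zero of=dd-fdatasync.tmp bs=64k count=256 conv=fdatasync

# oflag=dsync: the file is opened O_DSYNC, so each 64k write must hit
# the disk before dd issues the next one -- per-write latency dominates
# instead of raw bandwidth.
dd if=/dev/zero of=dd-dsync.tmp bs=64k count=256 oflag=dsync

# clean up
rm -f dd-fdatasync.tmp dd-dsync.tmp
```

So the dsync test measures something closer to commit latency than throughput, which is one plausible way a spinning-disk node with a battery-backed write cache can post a better number than an SSD node.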
Comments
The block size is 64k on both of these; that's what the bs= option sets.
Our things are tuned for high system 'interactivity'; the system's focus is to 'feel' fast. They're not tuned for contiguous block writes, which is why our dd numbers are usually quite lackluster, yet everyone loves our service.
So it was clear in the last chat? :P
It's still amazing how HDDs can perform better than SSDs in this situation.
Ehh, I don't know about that... all of our servers from this point onward will probably be SSD-powered, since we're not getting good utilization of the rest of the server before we run out of disk I/O. Same prices, though.
I asked about the gbit connection but didn't dare to ask about SSDs... I should have.
What about the disk space? Will it be full SSD or SSD-cached?
The right-side spikes are from me doing the dd test repeatedly. I would imagine that Nick's "throughput per device" graph would be higher up on the chart.
Depends... if the Crucial 960GB SSDs turn out to be reliable, or someone else comes out with a reliable SSD with the same amount of space in the same price range, it'll be full SSD. If not, it'll be SSD-cached. Disk space will remain the same either way.
Awesome, I do need the disk space.
Drooling.
http://romanrm.ru/en/dd-benchmark