Why are dd results dependent on the OS?
I'm not too familiar with how dd works; all I know is that it's useful for generating "empty" files and that some people use it as a crude benchmark of disk I/O. I was running some tests on the Xen templates I received and noticed that the dd results vary widely between OSes. Anybody care to shed some light on this for me?
Here are some examples (all are x86_64):
- CentOS 5.6 = 50MB/s (3 tests)
- Debian 5.0 = 71MB/s (3 tests)
- Debian 6.0 = 12-13 KB/s (4 tests, possible bug with template)
- Gentoo 2011.07 = 53MB/s (3 tests)
- Slackware 13.37 = 31-45MB/s (5 tests)
- Ubuntu 10.04 = 64-67MB/s (3 tests)
- Ubuntu 11.04 = 49-56MB/s (3 tests)
Comments
May we have the full dd command that you use to create the test files?
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
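For anyone reading along, that command writes 1 GiB of zeros (16k blocks of 64 KiB each) and flushes the data to disk before dd reports a rate, so page-cache effects are mostly excluded. A smaller variant (file name and size here are just examples) for a quick, low-impact check:

```shell
# 64 MiB test: 1k blocks of 64 KiB, flushed to disk before the
# transfer rate is reported (conv=fdatasync)
dd if=/dev/zero of=ddtest bs=64k count=1k conv=fdatasync
# remove the test file afterwards
rm -f ddtest
```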
Did you use the same host node with different templates, or different host nodes? Could it be that the block size of the host filesystem was not the same?
Same exact VPS (just kept rebuilding the OS) on the same exact node.
Xen PV or HVM? With PV the host will probably format the LV; with HVM the OS you are installing will format the virtual disk.
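If you're not sure which mode a guest is running in, these checks are a rough sketch (paths assumed standard for Linux Xen guests; the dmesg wording varies by kernel version):

```shell
# Prints "xen" when the guest is running under the Xen hypervisor
cat /sys/hypervisor/type 2>/dev/null
# PV kernels typically log Xen/paravirtualization details at boot
dmesg 2>/dev/null | grep -i xen | head
```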
Hmm, isn't it the command being used that matters, such as the value of bs=?
Interesting - I would have thought it would all be the same.
@Ixape You'd think that, but how the filesystem is formatted has just as much to do with it as how the test is run. xfs will be faster than ext3, and if you look at my post on LETv2, I rigged the test using software RAID6 and still beat out nearly every other provider represented in the thread. So the filesystem type, the sector size of the format, the chunk/stripe size of any underlying RAID, and the test parameters can all factor into different results.
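To make that concrete, these commands (standard Linux tools; the device path is only an example) expose the variables mentioned above for the filesystem under test:

```shell
# Filesystem type behind the directory being benchmarked
df -T .
# ext2/3/4 block size; replace /dev/xvda1 with your actual device
tune2fs -l /dev/xvda1 2>/dev/null | grep 'Block size'
# Chunk size of any underlying Linux software RAID
cat /proc/mdstat 2>/dev/null
```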
Tests were run with Xen PV, so I'm assuming all of the LVs were formatted the same?
@Joe, it could be that the problem with Debian 6 is the apt-xapian-index package, which provides /usr/sbin/update-apt-xapian-index. This script uses a lot of memory and needs a lot of disk I/O (see #305554).
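If that package is indeed the culprit, a quick way to confirm on an affected Debian 6 template is to stop the indexer and remove the package (run as root; a sketch, not a guaranteed fix):

```shell
# Kill the indexer if it is currently hammering CPU and disk
pkill -f update-apt-xapian-index || true
# Remove the package so the apt hook does not restart it
apt-get remove --purge apt-xapian-index
```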
I was trying to upgrade from the Debian 5 template to Debian 6 with:
and got pretty decent dd results after the upgrade (I didn't install apt-xapian-index, which aptitude recommended).
I got lenny, changed apt/sources.list, and did a dist-upgrade.
No problems with disk write speed (about 60MB/s).
So I'm running this version for now; when a working template becomes available I'll just rsync my data to my OpenVZ SecureDragon VPS, install squeeze, and rsync my data back.
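The lenny-to-squeeze route described above boils down to roughly this (release names as they were for Debian 5/6; back up your data first):

```shell
# Switch apt sources from lenny (Debian 5) to squeeze (Debian 6)
sed -i 's/lenny/squeeze/g' /etc/apt/sources.list
# Refresh the package index and upgrade the whole system
apt-get update && apt-get dist-upgrade
```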
I'm also running Debian 6 now, but it is x86_64 and I would prefer the 32-bit version.
I tested the Debian 6 x86_64 template again with the same results. I've got the 32-bit template installed and will test it shortly.
Hmm... Could it be the fdatasync option that caused the difference? What about using oflag=dsync instead?
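The two options measure different things, which could easily explain a gap: conv=fdatasync flushes once when dd finishes, while oflag=dsync forces a sync after every block write and is normally much slower. A small side-by-side sketch (sizes reduced so it finishes quickly):

```shell
# One flush at the end: approximates a sequential write
dd if=/dev/zero of=t1 bs=64k count=1k conv=fdatasync
# Sync after every 64 KiB write: measures per-write latency instead
dd if=/dev/zero of=t2 bs=64k count=1k oflag=dsync
rm -f t1 t2
```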
I think it's the 100% CPU load that might be the culprit.
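One way to test that theory is to time the run: if user+sys CPU time comes close to the real (wall-clock) time, the benchmark is CPU-bound rather than disk-bound. A sketch:

```shell
# Compare CPU time against wall time for the same dd test
time dd if=/dev/zero of=ddtest bs=64k count=1k conv=fdatasync
rm -f ddtest
```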
KuJoe: I noticed the same but never gave it another thought. Here are some samples from my tests:
I used: dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
Ubuntu: 298 MB/s
CentOS: 112 MB/s
Very strange that CentOS performs so poorly...