    Why are DD results dependent on OS?

    KuJoe Member, Provider
    edited July 2011 in Help

    I'm not too familiar with how dd works; all I know is that it's useful for generating "empty" files and some people use it as a crude benchmark of disk I/O. I was running some tests on the Xen templates I received and noticed that the dd results for each OS vary widely. Anybody care to shed some light on this for me?

    Here are some examples (all are x86_64):

    • CentOS 5.6 = 50MB/s (3 tests)
    • Debian 5.0 = 71MB/s (3 tests)
    • Debian 6.0 = 12-13 KB/s (4 tests, possible bug with template)
    • Gentoo 2011.07 = 53MB/s (3 tests)
    • Slackware 13.37 = 31-45MB/s (5 tests)
    • Ubuntu 10.04 = 64-67MB/s (3 tests)
    • Ubuntu 11.04 = 49-56MB/s (3 tests)
    Thanked by 1Nexus
    -Joe @ SecureDragon - LEB's Powered by Wyvern in FL, CO, CA, IL, NJ, GA, OR, TX, and AZ
    Need backup space? Check out BackupDragon

    Comments

    • May we have the full dd command that you used to create the test files?


      © 2011-2019 eLohkCalb

    • KuJoe Member, Provider

      dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
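For reference, here's that same command annotated (the size arithmetic is just what the flags imply; nothing here goes beyond what dd documents):

```shell
# Write a 1 GiB file of zeros and report throughput:
#   bs=64k          write in 64 KiB blocks
#   count=16k       16384 blocks, i.e. 16384 * 64 KiB = 1 GiB total
#   conv=fdatasync  fdatasync() the file before dd exits, so the
#                   reported speed includes flushing data to disk
#                   rather than just filling the page cache
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
```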

    • dannix Member

      Did you use the same host node with different templates, or different host nodes? Could it be that the block size of the host filesystem was not the same?
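For anyone wanting to rule that out, the block size the guest filesystem uses is easy to check (the device name below is only an example; substitute your actual root device):

```shell
# Fundamental block size of the filesystem backing the current directory
stat -f -c 'block size: %s' .

# On ext2/3/4, the same value can be read from the superblock
# (replace /dev/xvda1 with your real root device)
tune2fs -l /dev/xvda1 | grep 'Block size'
```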

    • KuJoe Member, Provider

      Same exact VPS (just kept rebuilding the OS) on the same exact node.

    • miTgiB Member

      Xen PV or HVM? With PV, the host will probably format the LV; with HVM, the OS you are installing will format the virtual disk.
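One quick way to see what the template actually gave you is to check the filesystem type and mount options from inside the guest:

```shell
# Filesystem type of the root volume (ext3 vs ext4 vs xfs, etc.)
df -T /

# Mount options; on ext3, data=ordered vs data=writeback alone can
# shift dd numbers noticeably
mount | grep ' / '
```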

      Hostigation High Resource Hosting - SolusVM KVM VPS / Proxmox OpenVZ VPS- Low Cost Comodo SSL Certificates
    • Hmm, isn't it the command being used that makes the difference, such as the value of bs=?

    • Ixape Member

      Interesting - Would have thought it would all be the same :s

    • miTgiB Member

      @Ixape You'd think that, but how the filesystem is formatted has just as much to do with it as how the test is run. xfs will be faster than ext3; if you look at my post on LETv2, I rigged the test using software RAID 6 yet beat out nearly every other provider represented in the thread. So how the filesystem is formatted, the sector size of the format, the chunk/stripe size of any underlying RAID, and the test parameters can all factor into different results.
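On the host-node side, the factors listed above can be inspected directly. This is just a sketch; the md device name is an example, and it assumes Linux software RAID with an ext filesystem:

```shell
# Software RAID level and chunk size
cat /proc/mdstat
mdadm --detail /dev/md0 | grep -i 'chunk size'

# ext filesystem block size and any stride/stripe-width tuning
tune2fs -l /dev/md0 | grep -Ei 'block size|stride|stripe'
```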

    • KuJoe Member, Provider

      Tests were run with Xen PV, so I'm assuming all of the LVs were formatted the same?

    • @Joe, it could be that the problem with Debian 6 is caused by the apt-xapian-index package, which
      provides /usr/sbin/update-apt-xapian-index. This script uses a lot of memory and needs a lot of disk I/O (see #305554).

      I was trying to update from debian 5 template to debian 6 with:

      xfl:~#  dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
      16384+0 records in
      16384+0 records out
      1073741824 bytes (1.1 GB) copied, 14.9736 s, 71.7 MB/s
      xfl:~#  sed -e 's/lenny/squeeze/g'  -i /etc/apt/sources.list
      xfl:~#  aptitude update
      xfl:~#  dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
      16384+0 records in
      16384+0 records out
      1073741824 bytes (1.1 GB) copied, 15.7812 s, 68.0 MB/s
      xfl:~#  aptitude -R install aptitude
      xfl:~#  dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
      16384+0 records in
      16384+0 records out
      1073741824 bytes (1.1 GB) copied, 16.1124 s, 66.6 MB/s
      xfl:~#  aptitude full-upgrade
      xfl:~#  dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
      

      and got pretty decent dd results after the update (I didn't install apt-xapian-index, which aptitude recommended).
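If you want to check whether a template shipped that package, something like this works (a sketch; assumes a Debian-based guest):

```shell
# Is apt-xapian-index installed?
dpkg -l apt-xapian-index

# If so, remove it so update-apt-xapian-index stops eating CPU and
# disk I/O on a small VPS
apt-get remove --purge -y apt-xapian-index
```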

    • I got lenny, changed apt/sources.list, and did a dist-upgrade.

      No problems with disk write speed (about 60MB/s).

      So now I'm running this version; when a working template is available, I'll just rsync my data to my OpenVZ SecureDragon VPS, install squeeze, and rsync my data back.

    • I'm also running Debian 6 now, but it's x86_64, and I would prefer the 32-bit version.

    • KuJoe Member, Provider

      I tested the Debian 6 x86_64 template again with the same results. I've got the 32 bit template installed and will test it shortly.

    • Hmm... Could it be the fdatasync option that caused the difference? What about using oflag=dsync instead?
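Worth noting that the two flags measure different things, so their numbers aren't directly comparable: conv=fdatasync flushes once at the end of the run, while oflag=dsync syncs after every single write and usually reports a much lower figure. A quick side-by-side, using the same sizes as the test in this thread:

```shell
# One flush at the end: "how fast can I stream 1 GiB to disk"
dd if=/dev/zero of=test1 bs=64k count=16k conv=fdatasync

# Sync after every 64 KiB write: dominated by per-write commit
# latency, so expect a much lower MB/s
dd if=/dev/zero of=test2 bs=64k count=16k oflag=dsync

rm -f test1 test2
```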


    • KuJoe Member, Provider

      I think it's the 100% CPU load that might be the culprit.
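An easy way to check that from inside the guest is to look at the load while the benchmark runs (assumes a Linux guest with /proc):

```shell
# Kick off the benchmark in the background...
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync &

# ...and check load while it runs; a 1-minute load average near or
# above the number of vCPUs suggests the guest is CPU-starved, which
# will drag dd's reported MB/s down
cat /proc/loadavg
uptime

wait
rm -f test
```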

    • @KuJoe: I noticed the same but never gave it much thought; here are some samples from my tests:

      I used: dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync

      Ubuntu: 298 MB/s
      CentOS: 112 MB/s

      Very strange that CentOS performs so poorly...
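One thing worth ruling out before comparing distros: they don't all ship the same coreutils, so it's worth confirming both guests run the same dd implementation and version before reading much into the numbers:

```shell
# GNU coreutils prints e.g. "dd (coreutils) 8.32"; different versions
# (or a non-GNU dd) can behave differently with the same flags
dd --version | head -n 1
```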

      NordicVPS.com - Unmanaged XEN and KVM VPS in US and EU - SolusVM - OpenVZ with VSwap