
Test the disk I/O of your VPS


Comments

  • A personal VPS of mine on our Germany Node.

    [root@test /]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 15.1204 seconds, 71.0 MB/s

    IntoVPS (512MB) where we host our master

    [root@master ~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 18.8047 seconds, 57.1 MB/s

    :D

  • prefiber.nl (OpenVZ - VPS Linux #2)

    ksx4system@domare:/tmp$ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 32.5573 s, 33.0 MB/s

    ultimahost.pl (OpenVZ - ovz.512MB, it's not lowendbox)

    ksx4system@maryland:/tmp$ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 16.0894 s, 66.7 MB/s

    ramhost.us (OpenVZ - custom plan)

    ksx4system@magnus:/tmp$ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 7.63024 s, 141 MB/s

  • Again: Hostigation 64 MB

    # dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 6.98311 s, 154 MB/s
    

    VPSunlimited 1 GB XEN

    [[email protected]]#dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 5.03736 seconds, 213 MB/s
    
  • eva2000 Veteran
    edited September 2011

    Guys, have you tried ioping for random disk I/O latency tests as well as sequential disk tests? http://vbtechsupport.com/1239/
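
    A minimal sketch of running ioping directly (the raw tool rather than the wrapper script), assuming it is already installed and /tmp sits on the disk you want to test:

    # 10 random-read latency requests against /tmp (default 4K request size)
    ioping -c 10 /tmp
    # seek rate test (runs for a few seconds by default)
    ioping -R /tmp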

  • Michael Member
    edited September 2011

    This is my QualityServers.co.uk OpenVZ Eliminator on VZ2UK...

    root@localhost:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 8.66047 s, 124 MB/s

    And my 128MB on VZ3UK...

    root@localhost:~$ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 17.5009 s, 61.4 MB/s

  • New 512MB BuyVM OpenVZ VPS, fresh out of the box so to speak.

    ioping.sh random disk I/O latency results (the default is a 4K request size), which include a dd sequential disk test:

    
    [root@web1 ~]# ./ioping.sh 
    -----------------------------------------
    ioping.sh 0.9.8 - http://vbtechsupport.com
    by George Liu (eva2000)
    -----------------------------------------
              ioping.sh 0.9.8 MENU
    -----------------------------------------
    1. Install ioping
    2. Re-install ioping
    3. Run ioping default tests
    4. Run ioping custom tests
    5. Exit
    -----------------------------------------
    Enter option [ 1 - 5 ] 3
    -----------------------------------------
    Virtuzzo OR OpenVZ Virtualisation detected
    
    ***************************************************
    ioping code.google.com/p/ioping/
    ioping.sh 0.9.8
    shell wrapper script by George Liu (eva2000)
    http://vbtechsupport.com
    ***************************************************
    
    Virtuzzo or OpenVZ Virtualisation detected
    **********************************
    dd (sequential disk speed test)...
    **********************************
    dd if=/dev/zero of=testfilex bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 8.82299 s, 122 MB/s
    
    ************************
    starting ioping tests...
    ***************************************************
    ioping disk I/O test (default 1MB working set)
    ***************************************************
    disk I/O: /
    --- / (simfs /dev/simfs) ioping statistics ---
    5 requests completed in 4052.3 ms, 295 iops, 1.2 mb/s
    min/avg/max/mdev = 0.7/3.4/5.4/1.9 ms
    
    **********************************************
    seek rate test (default 1MB working set)
    **********************************************
    seek rate: /
    --- / (simfs /dev/simfs) ioping statistics ---
    252 requests completed in 3007.9 ms, 414 iops, 1.6 mb/s
    min/avg/max/mdev = 0.3/2.4/43.0/4.2 ms
    
    **********************************************
    sequential test (default 1MB working set)
    **********************************************
    -----------------------
    sequential: /
    --- / (simfs /dev/simfs) ioping statistics ---
    35 requests completed in 3061.8 ms, 13 iops, 3.2 mb/s
    min/avg/max/mdev = 20.2/78.2/220.2/42.1 ms
    -----------------------
    sequential cached I/O: /
    --- / (simfs /dev/simfs) ioping statistics ---
    358 requests completed in 3003.8 ms, 4736 iops, 1184.0 mb/s
    min/avg/max/mdev = 0.1/0.2/3.4/0.3 ms
    
  • eva2000 Veteran
    edited September 2011

    Proof that sequential dd tests don't tell the whole story.

    A second VPS with 1.5GB RAM and 15GB of fast SSD space on an Intel X25-M: while its dd test is slower than the BuyVM box, its random disk I/O is 3-4x faster.

    1.2/1.6MB/s
    vs
    4.6/4.3MB/s

    # ./ioping.sh 
    ***************************************************
    ioping code.google.com/p/ioping/
    ioping.sh 0.9.8
    shell wrapper script by George Liu (eva2000)
    http://vbtechsupport.com
    ***************************************************
    
    Virtuzzo or OpenVZ Virtualisation detected
    **********************************
    dd (sequential disk speed test)...
    **********************************
    dd if=/dev/zero of=testfilex bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 11.5992 seconds, 92.6 MB/s
    
    ************************
    starting ioping tests...
    ***************************************************
    ioping disk I/O test (default 1MB working set)
    ***************************************************
    disk I/O: /
    --- / (simfs /dev/simfs) ioping statistics ---
    5 requests completed in 4009.3 ms, 1185 iops, 4.6 mb/s
    min/avg/max/mdev = 0.3/0.8/1.0/0.3 ms
    
    **********************************************
    seek rate test (default 1MB working set)
    **********************************************
    seek rate: /
    --- / (simfs /dev/simfs) ioping statistics ---
    1444 requests completed in 3001.3 ms, 1090 iops, 4.3 mb/s
    min/avg/max/mdev = 0.2/0.9/2.0/0.2 ms
    
    **********************************************
    sequential test (default 1MB working set)
    **********************************************
    -----------------------
    sequential: /
    --- / (simfs /dev/simfs) ioping statistics ---
    47 requests completed in 3007.6 ms, 16 iops, 4.0 mb/s
    min/avg/max/mdev = 45.0/62.7/217.5/23.2 ms
    -----------------------
    sequential cached I/O: /
    --- / (simfs /dev/simfs) ioping statistics ---
    2362 requests completed in 3001.2 ms, 2722 iops, 680.5 mb/s
    min/avg/max/mdev = 0.2/0.4/0.7/0.1 ms
    
  • The problem is that you should not use dd with an SSD; SSDs and HDDs handle disk writes in different ways.

  • eva2000 Veteran
    edited September 2011

    So what would be a better way to compare SSD vs non-SSD sequential disk I/O performance?

    The reason the SSD is slower here is that the Intel X25-M's rated sequential write speed is a lot lower than that of the (probably non-SSD) VPS, not that SSDs are slower in general.

    The comparison (the ioping results) was meant to highlight random disk I/O.

  • skagerrak Member
    edited September 2011

    The reason is that SSDs are normally much faster at reading than writing, because before you can write to an SSD the target memory has to be erased block by block. In addition, SSDs use wear-leveling to preserve the blocks: write attempts are spread across the whole drive rather than placed directly one after another as on an HDD. Write speed steadily falls as the drive fills up, because the controller can no longer find an empty block and has to read a block, erase its data, and then write to it (read-modify-write), which costs time. And since dd just copies sector by sector, it is not well suited to an SSD.

    For SSD use hdparm -tT <device> instead.
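
    A minimal sketch of that test; the device name is an assumption, so point it at your actual SSD:

    # -T measures cached reads (memory/bus throughput), -t measures buffered sequential disk reads
    hdparm -tT /dev/sda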

  • Yeah, I know SSDs work differently from non-SSD disks; I'm just wondering what would be a better tool for comparing sequential disk I/O performance between non-SSDs and SSDs if dd isn't suited. Bonnie++? sysbench?

    That's detracting from the point of my ioping results, of course: random disk I/O is probably a better indicator of responsiveness/performance than sequential disk I/O in a server environment.

  • skagerrak Member
    edited September 2011

    I haven't tried Bonnie++; I've only used hdparm. Maybe I will have a look at Bonnie++.

    One might also add that the provider has to take SSD optimisation in virtual environments into consideration, and so does your virtual Linux guest (noatime, data=writeback, elevator=noop).
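
    A minimal sketch of those guest-side settings; the filesystem type and device names are assumptions, adjust them to your own setup:

    # /etc/fstab entry for an SSD-backed filesystem: skip atime updates, use writeback journaling
    /dev/xvda1   /srv   ext4   noatime,data=writeback   0   2

    # switch the disk to the noop I/O scheduler (takes effect immediately, not persistent across reboots)
    echo noop > /sys/block/xvda/queue/scheduler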

  • Yeah, I usually use bonnie++ v1.03e (http://www.coker.com.au/bonnie++/), but there's also the 1.96 experimental release (http://www.coker.com.au/bonnie++/experimental/), which seems to tie the file allocation size to memory: the tested file size needs to be twice the allocated memory size.
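
    For example, rough sketches of both tools; the 512MB RAM figure is an assumption, and bonnie++ wants the test file to be at least twice your RAM:

    # bonnie++: 1GB test file on a 512MB box, run as an unprivileged user
    bonnie++ -d /tmp -s 1024 -r 512 -u nobody

    # sysbench (0.4.x syntax): random read/write over a 2GB file set
    sysbench --test=fileio --file-total-size=2G prepare
    sysbench --test=fileio --file-total-size=2G --file-test-mode=rndrw run
    sysbench --test=fileio --file-total-size=2G cleanup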

  • eva2000 said: Not that ssd is slower.

    SSDs are slower at writing, but since there is no seek time to speak of, they excel at reading data.

    Personally, I do not think SSDs are ready for production VPS use: no RAID setup, hardware or software, supports TRIM or GC. I'm sure a solution for this is around the corner.

  • What is the problem with SSD and RAID?

  • ATA TRIM does not work across multiple SSDs in RAID, only with individual SSDs. There is, however, a "garbage collection" feature on some SSD chipsets, but it simply works like a plain defrag. The problem with SSDs is that you should not fill more than about 85-90% of the space, otherwise you might run out of free blocks.

  • Then create a partition covering 80% of the whole SSD and build software RAID1 from that partition; the remaining 20% of the SSD will never be used or written to. Am I missing something?
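
    A rough sketch of that idea with mdadm; the device names are assumptions, and the ~80%-sized partitions have to be created first (fdisk/parted):

    # mirror the two under-sized partitions; the unpartitioned ~20% of each SSD stays untouched as spare area
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1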

  • That still does not help with the lack of ATA TRIM, nor with read-modify-write.

  • [root@claw ~]# dd if=/dev/zero of=test bs=16k count=64k conv=fdatasync
    65536+0 records in
    65536+0 records out
    1073741824 bytes (1.1 GB) copied, 37.1493 s, 28.9 MB/s
    [root@claw ~]# dd if=/dev/zero of=test bs=16k count=64k conv=fdatasync
    65536+0 records in
    65536+0 records out
    1073741824 bytes (1.1 GB) copied, 36.5373 s, 29.4 MB/s
    [root@claw ~]# dd if=/dev/zero of=test bs=16k count=64k conv=fdatasync
    65536+0 records in
    65536+0 records out
    1073741824 bytes (1.1 GB) copied, 36.1596 s, 29.7 MB/s

    Nice and stable, as it's the only VPS on my test server.

  • BuyVM OVZ 256MB:

    xxx@joy:~$ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 10.2168 s, 105 MB/s
    

    Quality Servers Xen Eliminator:

    xxx@jasper:~$ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 13.7222 s, 78.2 MB/s
    

    ZazoomVPS KVM 1GB:

    xxx@pearl:~$ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 37.9963 s, 28.3 MB/s
    
  • @Infinity sorry for the delay in response. That VPS is not in an active state; as mentioned, there's not much more I can provide since you're not the actual client.

  • SecureDragon OpenVZ 96MB:

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 24.0446 s, 44.7 MB/s

    dd if=/dev/zero of=test bs=16k count=64k conv=fdatasync
    65536+0 records in
    65536+0 records out
    1073741824 bytes (1.1 GB) copied, 27.6076 s, 38.9 MB/s

  • UptimeVPS OpenVZ 128MB:

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 12.0486 s, 89.1 MB/s

    dd if=/dev/zero of=test bs=16k count=64k conv=fdatasync
    65536+0 records in
    65536+0 records out
    1073741824 bytes (1.1 GB) copied, 13.2432 s, 81.1 MB/s

  • OK, we're getting a little redundancy in here... ^-^

  • host1plus.com, 1 GB Xen, really slow :(

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 42.3543 seconds, 25.4 MB/s
    
  • Infinity Member, Host Rep

    I guess that is average. But I never liked host1plus. Is that meant to be a "cloud" VPS?

  • kiloserve Member
    edited September 2011

    From our BudgetBox XenPV plans. The server is almost full, maybe 3 to 5 more clients left to go, so the speed shown should be fairly representative of a fully loaded server.

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 6.87557 seconds, 156 MB/s
  • tux Member
    edited September 2011

    My cell phone

    Nokia-N900:~# time dd if=/dev/zero of=test bs=64k count=16k
    16384+0 records in
    16384+0 records out
    real    0m 51.34s
    user    0m 0.05s
    sys     0m 13.70s
  • SurmountedNET Member
    edited September 2011

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 3.32847 seconds, 323 MB/s

  • tux said: My cell phone

    But that isn't with sync :P
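
    For comparison, the same command with fdatasync added so the result is on the same footing as the other posts in this thread (a sketch, not tested on the N900):

    time dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync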

This discussion has been closed.