
Test the disk I/O of your VPS

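Every result in this thread is a variant of the same 1 GB sequential write test:

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync

conv=fdatasync makes dd physically sync the file data to disk before reporting a speed, so the page cache can't inflate the number.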

Comments

  • Infinity Retired Staff

    Is that your localhost? :S Why didn't you change the hostname?


  • tux Member

    It is the default hostname on the Arch Linux template.

  • @Infinity, which node are you on? Open a ticket; that is not normal.


  • Infinity Retired Staff

    semoweb said: @Infinity, which node are you on? Open a ticket; that is not normal.

    I don't really know, as I'm not the account holder and haven't opened a ticket. When I traceroute it, nothing happens, but that is my ISP's problem; it times out for lots of other servers in different datacenters too.

    64.31.44.147 is the IP. The VPS has had lots of problems, not all SemoWeb's fault; we got lots of attacks, etc. Our server very recently failed to start up properly: we would reboot it and it would power itself down again, even though we hadn't exceeded our BW limit like we had previously. We bought enough BW.

    We are going to transfer them all to a cPanel server soon.


  • @Infinity Ah, just found it. The VPS is not active. The output is a lot higher than you mentioned but still not up to par, so thanks for bringing it to our attention; we will be reviewing that node now.


  • Infinity Retired Staff

    Our VPS is down yet again.


  • @Infinity Your comment is about the VPS with us?


  • Infinity Retired Staff

    Yep


  • @Infinity, per the IP you supplied above, you may want to speak with the account holder, as this VPS is not in an active state; I can't say much since you're not the account holder.

    However, if you are added to the account or have the login details for the client area, you can also submit a ticket to find out more.


  • Infinity Retired Staff

    Nope. The server keeps going offline and then coming back up, and it has nothing to do with anything we are running. As far as I'm concerned this is rubbish service.

    As for the account holder, he is on holiday at the moment (yep, a long one).


  • A personal VPS of mine on our Germany Node.

    [root@test /]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 15.1204 seconds, 71.0 MB/s

    IntoVPS (512MB) where we host our master

    [root@master ~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 18.8047 seconds, 57.1 MB/s

    :D

  • prefiber.nl (OpenVZ - VPS Linux #2)

    ksx4system@domare:/tmp$ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 32.5573 s, 33.0 MB/s

    ultimahost.pl (OpenVZ - ovz.512MB, it's not lowendbox)

    ksx4system@maryland:/tmp$ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 16.0894 s, 66.7 MB/s

    ramhost.us (OpenVZ - custom plan)

    ksx4system@magnus:/tmp$ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 7.63024 s, 141 MB/s

    ksx4system.net <--- my homepage

  • Again: Hostigation 64 MB

    # dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 6.98311 s, 154 MB/s
    

    VPSunlimited 1 GB XEN

    # dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 5.03736 seconds, 213 MB/s
    


  • eva2000 Member

    Guys, have you tried ioping for random disk I/O latency tests as well as sequential disk tests? http://vbtechsupport.com/1239/
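    If you'd rather call ioping directly instead of via the wrapper script, the two basic modes look like this (a sketch; "." tests whatever filesystem the current directory lives on):

    # random I/O latency: 10 requests against the current directory's filesystem
    ioping -c 10 .
    # seek rate test: issue requests back to back and report iops
    ioping -R .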

  • Michael Member

    This is my QualityServers.co.uk OpenVZ Eliminator on VZ2UK...

    root@localhost:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 8.66047 s, 124 MB/s

    And my 128MB on VZ3UK...

    root@localhost:~$ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 17.5009 s, 61.4 MB/s

  • New 512MB BuyVM OpenVZ VPS, fresh out of the box so to speak.

    ioping.sh random disk I/O latency results (default is a 4K request size), which include the dd sequential disk test:

    
    [root@web1 ~]# ./ioping.sh 
    -----------------------------------------
    ioping.sh 0.9.8 - http://vbtechsupport.com
    by George Liu (eva2000)
    -----------------------------------------
              ioping.sh 0.9.8 MENU
    -----------------------------------------
    1. Install ioping
    2. Re-install ioping
    3. Run ioping default tests
    4. Run ioping custom tests
    5. Exit
    -----------------------------------------
    Enter option [ 1 - 5 ] 3
    -----------------------------------------
    Virtuzzo OR OpenVZ Virtualisation detected
    
    ***************************************************
    ioping code.google.com/p/ioping/
    ioping.sh 0.9.8
    shell wrapper script by George Liu (eva2000)
    http://vbtechsupport.com
    ***************************************************
    
    Virtuzzo or OpenVZ Virtualisation detected
    **********************************
    dd (sequential disk speed test)...
    **********************************
    dd if=/dev/zero of=testfilex bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 8.82299 s, 122 MB/s
    
    ************************
    starting ioping tests...
    ***************************************************
    ioping disk I/O test (default 1MB working set)
    ***************************************************
    disk I/O: /
    --- / (simfs /dev/simfs) ioping statistics ---
    5 requests completed in 4052.3 ms, 295 iops, 1.2 mb/s
    min/avg/max/mdev = 0.7/3.4/5.4/1.9 ms
    
    **********************************************
    seek rate test (default 1MB working set)
    **********************************************
    seek rate: /
    --- / (simfs /dev/simfs) ioping statistics ---
    252 requests completed in 3007.9 ms, 414 iops, 1.6 mb/s
    min/avg/max/mdev = 0.3/2.4/43.0/4.2 ms
    
    **********************************************
    sequential test (default 1MB working set)
    **********************************************
    -----------------------
    sequential: /
    --- / (simfs /dev/simfs) ioping statistics ---
    35 requests completed in 3061.8 ms, 13 iops, 3.2 mb/s
    min/avg/max/mdev = 20.2/78.2/220.2/42.1 ms
    -----------------------
    sequential cached I/O: /
    --- / (simfs /dev/simfs) ioping statistics ---
    358 requests completed in 3003.8 ms, 4736 iops, 1184.0 mb/s
    min/avg/max/mdev = 0.1/0.2/3.4/0.3 ms
    
  • eva2000 Member

    Proof that sequential dd tests don't tell the whole story.

    This is a second VPS with 1.5GB RAM and 15GB of fast SSD disk space on an Intel X25-M: while dd tests slower than the BuyVM box, the random disk I/O is 3-4x faster.

    1.2/1.6 MB/s vs 4.6/4.3 MB/s

    # ./ioping.sh 
    ***************************************************
    ioping code.google.com/p/ioping/
    ioping.sh 0.9.8
    shell wrapper script by George Liu (eva2000)
    http://vbtechsupport.com
    ***************************************************
    
    Virtuzzo or OpenVZ Virtualisation detected
    **********************************
    dd (sequential disk speed test)...
    **********************************
    dd if=/dev/zero of=testfilex bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 11.5992 seconds, 92.6 MB/s
    
    ************************
    starting ioping tests...
    ***************************************************
    ioping disk I/O test (default 1MB working set)
    ***************************************************
    disk I/O: /
    --- / (simfs /dev/simfs) ioping statistics ---
    5 requests completed in 4009.3 ms, 1185 iops, 4.6 mb/s
    min/avg/max/mdev = 0.3/0.8/1.0/0.3 ms
    
    **********************************************
    seek rate test (default 1MB working set)
    **********************************************
    seek rate: /
    --- / (simfs /dev/simfs) ioping statistics ---
    1444 requests completed in 3001.3 ms, 1090 iops, 4.3 mb/s
    min/avg/max/mdev = 0.2/0.9/2.0/0.2 ms
    
    **********************************************
    sequential test (default 1MB working set)
    **********************************************
    -----------------------
    sequential: /
    --- / (simfs /dev/simfs) ioping statistics ---
    47 requests completed in 3007.6 ms, 16 iops, 4.0 mb/s
    min/avg/max/mdev = 45.0/62.7/217.5/23.2 ms
    -----------------------
    sequential cached I/O: /
    --- / (simfs /dev/simfs) ioping statistics ---
    2362 requests completed in 3001.2 ms, 2722 iops, 680.5 mb/s
    min/avg/max/mdev = 0.2/0.4/0.7/0.1 ms
    
  • The problem is that you should not use dd with an SSD; SSDs and HDDs handle disk writes in different ways.

  • eva2000 Member

    So what would be a better way of comparing SSD vs non-SSD sequential disk I/O performance?

    The reason the SSD is slower here is that the Intel X25-M's rated sequential write speed is a lot lower than that of the (probably non-SSD) VPS, not that SSD is slower in general.

    The comparison (the ioping results) was meant to highlight random disk I/O.

  • skagerrak Member

    The reason is that SSDs are normally much faster at reading than writing, because before you can write to an SSD you have to erase, block by block, the memory you want to write to. Additionally, an SSD does wear-leveling in order to preserve its blocks: it distributes writes across the whole SSD rather than placing them directly one after another as on an HDD. The write speed of an SSD also falls steadily as it fills with data, because the controller no longer finds empty blocks and has to read a block first, erase it, and then write to it (read-modify-write), which costs time. And since dd just copies sector by sector, it is not well suited to an SSD.

    For an SSD, use hdparm -tT <device> instead.
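    For reference, -T times cached reads and -t times buffered reads from the device; note that this measures reads only, and inside an OpenVZ container there is usually no raw device to point it at. Something like:

    # read-timing test; /dev/sda is an example device name, run it 2-3 times
    hdparm -tT /dev/sda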

  • Yeah, I know SSDs work differently from non-SSD disks; I'm just wondering what would be a better tool than dd for comparing sequential disk I/O performance between non-SSDs and SSDs. Bonnie++? sysbench? (A sysbench sketch follows below.)

    That's somewhat beside the point of my ioping results, though: random disk I/O is probably a better indicator of responsiveness/performance than sequential disk I/O in a server environment.
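    For what it's worth, a sysbench random read/write file I/O run might look like this sketch (sysbench 0.4-era syntax; size the file set at about twice RAM so the page cache can't absorb it):

    # prepare ~2GB of test files (for a ~1GB box), run 60s of random r/w, clean up
    sysbench --test=fileio --file-total-size=2G prepare
    sysbench --test=fileio --file-total-size=2G --file-test-mode=rndrw \
        --max-time=60 --max-requests=0 run
    sysbench --test=fileio --file-total-size=2G cleanup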

  • skagerrak Member

    I haven't tried Bonnie++; I have just used hdparm so far. Maybe I will have a look at Bonnie++.

    One might also add that the provider has to think about optimising SSD use in virtualised environments, just as your virtual Linux guest does (noatime, data=writeback, elevator=noop); a sketch follows below.
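    Concretely, that guest-side tuning might look something like this (a sketch; the device and filesystem names are examples):

    # /etc/fstab: mount with noatime and writeback journaling
    /dev/sda1  /  ext4  noatime,data=writeback  0  1

    # switch the SSD to the noop I/O scheduler (not persistent across reboots)
    echo noop > /sys/block/sda/queue/scheduler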

  • Yeah, I usually use Bonnie++ v1.03e http://www.coker.com.au/bonnie++/ but there's also the 1.96 experimental release http://www.coker.com.au/bonnie++/experimental/ which seems to have memory-to-file-size requirements: the tested file size needs to be twice the allocated memory.
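    A typical invocation that respects that rule on, say, a 1GB box (a sketch: -s is the test file size in MB, -r the RAM size in MB, -u the user to run as):

    # 2GB of test files against 1GB of RAM, run as nobody in /tmp
    bonnie++ -d /tmp -s 2048 -r 1024 -u nobody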

  • eva2000 said: not that SSD is slower in general.

    SSDs are slower at writing, but since there is no seek time to speak of, they excel at reading data.

    Personally, I do not think SSD is ready for production VPS use: no RAID, hardware or software, supports TRIM or GC. I'm sure a solution for this is around the corner.

    Hostigation High Resource Hosting - SolusVM OpenVZ/KVM VPS
  • ATA TRIM does not work across multiple SSDs in RAID, just with individual SSDs. Some SSD chipsets do, however, have a "garbage collection" feature, but it simply works like a plain defrag. The other problem with SSDs is that you should not fill more than about 85-90% of the space, otherwise you might run out of free blocks.

  • Then create a partition sized at 80% of the whole SSD and build a software RAID1 using that partition; 20% of the SSD will never be used or written to. Am I missing something?
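    As a sketch of that idea (device names are examples): partition each SSD to about 80% of its capacity with fdisk, then mirror the two partitions:

    # /dev/sda1 and /dev/sdb1 each cover ~80% of their disks
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1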

  • That still does not help with the lack of ATA TRIM or with read-modify-write.

  • [root@claw ~]# dd if=/dev/zero of=test bs=16k count=64k conv=fdatasync
    65536+0 records in
    65536+0 records out
    1073741824 bytes (1.1 GB) copied, 37.1493 s, 28.9 MB/s
    [root@claw ~]# dd if=/dev/zero of=test bs=16k count=64k conv=fdatasync
    65536+0 records in
    65536+0 records out
    1073741824 bytes (1.1 GB) copied, 36.5373 s, 29.4 MB/s
    [root@claw ~]# dd if=/dev/zero of=test bs=16k count=64k conv=fdatasync
    65536+0 records in
    65536+0 records out
    1073741824 bytes (1.1 GB) copied, 36.1596 s, 29.7 MB/s

    Nice and stable, as it's the only VPS on my test server.

    FreeVPS.us - The oldest post to host VPS provider
  • BuyVM OVZ 256MB:

    xxx@joy:~$ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 10.2168 s, 105 MB/s
    

    Quality Servers Xen Eliminator:

    xxx@jasper:~$ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 13.7222 s, 78.2 MB/s
    

    ZazoomVPS KVM 1GB:

    xxx@pearl:~$ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 37.9963 s, 28.3 MB/s
    

    Pneuma Web Solutions - We provide full-service web application development & consultancy services.

  • @Infinity sorry for the delay in response; that VPS is not in an active state. As mentioned, there's not much more I can provide since you're not the actual client.


  • Secure Dragon OpenVZ 96MB:

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 24.0446 s, 44.7 MB/s

    dd if=/dev/zero of=test bs=16k count=64k conv=fdatasync
    65536+0 records in
    65536+0 records out
    1073741824 bytes (1.1 GB) copied, 27.6076 s, 38.9 MB/s

  • UptimeVPS OpenVZ 128MB:

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 12.0486 s, 89.1 MB/s

    dd if=/dev/zero of=test bs=16k count=64k conv=fdatasync
    65536+0 records in
    65536+0 records out
    1073741824 bytes (1.1 GB) copied, 13.2432 s, 81.1 MB/s

  • OK, we're getting a little redundancy in here... ^-^

    Speech was given to us to disguise our thoughts.

  • host1plus.com, 1 GB Xen, really slow :(

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 42.3543 seconds, 25.4 MB/s
    


  • Infinity Retired Staff

    I guess that is average. But I never liked host1plus. Is that meant to be a "cloud" VPS?


  • kiloserve Member

    From our BudgetBox XenPV plans... the server is almost full, maybe 3 to 5 more clients left to go, so this speed should be fairly representative of a fully loaded server.

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 6.87557 seconds, 156 MB/s
  • tux Member

    My cell phone

    Nokia-N900:~# time dd if=/dev/zero of=test bs=64k count=16k
    16384+0 records in
    16384+0 records out
    real    0m 51.34s
    user    0m 0.05s
    sys     0m 13.70s
  • SurmountedNET Member

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 3.32847 seconds, 323 MB/s

    -Curtis from Surmounted.NET | Dallas, TX Xen Virtual Server goodness.
  • tux said: My cell phone

    But that isn't with sync :P

  • yomero said: But that isn't with sync :P

    If I try the command with the conv parameter, it prints this message:

    Nokia-N900:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    BusyBox v1.10.2 (Debian 3:1.10.2.legal-1osso30+0m5) multi-call binary
    
    Usage: dd [if=FILE] [of=FILE] [bs=N] [count=N] [skip=N] [seek=N]
    
    Nokia-N900:~# 
  • Yep, I guess it isn't the same dd :P
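    A rough stand-in for conv=fdatasync on BusyBox dd is to time the write together with an explicit sync (cruder, since sync flushes all dirty data system-wide):

    # time the write plus a global sync instead of conv=fdatasync
    time sh -c 'dd if=/dev/zero of=test bs=64k count=16k && sync'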

  • New results with GNU dd:

    Nokia-N900:~# gdd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 tietuetta sisään
    16384+0 tietuetta ulos
    1073741824 tavua (1,1 GB) kopioitu 48,8937 sekunnissa, 22,0 MB/s
  • Wow, on your cellphone? xDD Finland? :P

  • tux Member

    Yes, on my cell phone. This is the first Linux phone from Nokia; it was designed in Finland and made in Korea. http://en.wikipedia.org/wiki/Nokia_N900

  • Yes, I have seen that little machine n_n Seems interesting. Also, the new N9 comes with a kind of Linux, right? (MeeGo?) Offtopic :D

  • Heh... hammering the flash memory in the cellphone with dd. Nice ;)

  • yomero said: Also, the new N9 comes with a kind of Linux, right? (MeeGo?)

    Yes, with MeeGo Harmattan.

  • From the Liquidweb cloud:

    [root@host ~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 11.838 seconds, 90.7 MB/s

  • BuyVM 512

    # dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 28.9379 seconds, 37.1 MB/s

    @Francisco should I submit a ticket too?

    UptimeVPS 128

    # dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 16.4348 seconds, 65.3 MB/s
    

    UptimeVPS 384

    # dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 16.9334 seconds, 63.4 MB/s
    
    My Own Universe | ChatX IRC Network irc.chatx.net:6667