Time4vps 'launch' KVM 2gb real-world benchmark/review (updating as I go )

vimalware Member
edited November 2016 in Reviews

So I got another invite email for @time4vps KVM.

The order link in the email listed 1x 2.4 GHz instead of the 2x 1.8 GHz display text from Black Friday (for the base 2 GB KVM).

So I took the bait and paid with PayPal.

I chose their Ubuntu 16.04 template from the billing panel.

When the VM came up after 3-4 mins, I did the standard 'apt-get update'.

VM seemed snappy overall. Could be just a new node.

Checked /proc/cpuinfo and found a single 1.8 GHz CPU listed. *shrugs*

However, htop says 2 'CPUs' are present. So this could be openvz7(tm)-induced weirdness.

On to some real-world tests...

time apt-get dist-upgrade -y

real    3m35.350s
user    0m54.860s
sys     0m14.116s

This involves unpacking the latest 4.4 distro kernel and its headers, too.

#

Next,

**OpenSSL Intel hardware acceleration (AES-NI) test for AES-128**

openssl speed -evp aes128    
OpenSSL 1.0.2g  1 Mar 2016
built on: reproducible build, date unspecified
options:bn(64,64) rc4(16x,int) des(idx,cisc,16,int) aes(partial) blowfish(idx) 
compiler: cc -I. -I.. -I../include  -fPIC -DOPENSSL_PIC -DOPENSSL_THREADS -D_REENTRANT -DDSO_DLFCN -DHAVE_DLFCN_H -m64 -DL_ENDIAN -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -Wl,-Bsymbolic-functions -Wl,-z,relro -Wa,--noexecstack -Wall -DMD32_REG_T=int -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DMD5_ASM -DAES_ASM -DVPAES_ASM -DBSAES_ASM -DWHIRLPOOL_ASM -DGHASH_ASM -DECP_NISTZ256_ASM
The 'numbers' are in 1000s of bytes per second processed.
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
aes-128-cbc     463989.67k   517986.45k   538030.41k   528736.75k   526833.57k
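For scale: OpenSSL reports these figures in thousands of bytes per second, so the 8192-byte column works out to roughly half a gigabyte per second on a single core, which is consistent with AES-NI being active. A quick unit conversion of my own (not part of the original output):

```shell
# Convert the aes-128-cbc 8192-byte figure (526833.57k, i.e. thousands
# of bytes/s) into MB/s. ~527 MB/s on one core suggests AES-NI is in use.
awk 'BEGIN { printf "%.0f MB/s\n", 526833.57 / 1000 }'
```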

RSA signing

openssl speed rsa
--truncated-for-brevity-
                  sign    verify    sign/s verify/s
rsa  512 bits 0.000063s 0.000004s  15858.9 282157.2
rsa 1024 bits 0.000129s 0.000008s   7771.1 122249.0
rsa 2048 bits 0.000831s 0.000024s   1204.0  41141.6
rsa 4096 bits 0.005428s 0.000083s    184.2  12082.3

I have not seen any E3 go above 900 signs/sec per core.

E5s never cross 850.

More voodoo magic I do not understand here. shrugs

Some kind of 'transparent parallelization' apparently.
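One sanity check I can offer (my own arithmetic, not from the thread): the signs/s column is just the reciprocal of the per-sign time, so the 2048-bit figure is at least internally consistent:

```shell
# rsa 2048 bits: 0.000831 s per sign => ~1203 signs/s,
# matching the 1204.0 reported by 'openssl speed rsa'.
awk 'BEGIN { printf "%.1f signs/s\n", 1 / 0.000831 }'
```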

#

Next, Docker pulls.

Let's try to pull the most popular ubuntu image. (60 MB-ish, maybe)

time docker pull ubuntu                                                                   
Using default tag: latest
latest: Pulling from library/ubuntu

aed15891ba52: Pull complete 
773ae8583d14: Pull complete 
d1d48771f782: Pull complete 
cd3d6cd6c0cf: Pull complete 
8ff6f8a9120c: Pull complete 
Digest: sha256:35bc48a1ca97c3971611dc4662d08d131869daa692acb281c7e9e052924e38b1
Status: Downloaded newer image for ubuntu:latest

real    0m10.771s
user    0m0.032s
sys     0m0.020s

That is fast enough for me. (remember: Docker images get decompressed too)
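Taking the ~60 MB size estimate above at face value, the wall-clock time implies a modest effective rate, since layer decompression is folded into the same 10.8 s. Rough arithmetic of my own:

```shell
# ~60 MB (estimated) pulled and decompressed in 10.771 s of wall time
awk 'BEGIN { printf "%.1f MB/s\n", 60 / 10.771 }'
```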

Next,
ioping

ioping -c 10 .
4 KiB from . (ext4 /dev/sda1): request=1 time=425 us
4 KiB from . (ext4 /dev/sda1): request=2 time=419 us
4 KiB from . (ext4 /dev/sda1): request=3 time=518 us
4 KiB from . (ext4 /dev/sda1): request=4 time=561 us
4 KiB from . (ext4 /dev/sda1): request=5 time=571 us
4 KiB from . (ext4 /dev/sda1): request=6 time=455 us
4 KiB from . (ext4 /dev/sda1): request=7 time=504 us
4 KiB from . (ext4 /dev/sda1): request=8 time=607 us
4 KiB from . (ext4 /dev/sda1): request=9 time=419 us
4 KiB from . (ext4 /dev/sda1): request=10 time=536 us

--- . (ext4 /dev/sda1) ioping statistics ---
10 requests completed in 9.01 s, 1.99 k iops, 7.79 MiB/s
min/avg/max/mdev = 419 us / 501 us / 607 us / 65 us

and

ioping -RD .

--- . (ext4 /dev/sda1) ioping statistics ---
301 requests completed in 3.01 s, 100 iops, 400.3 KiB/s
min/avg/max/mdev = 7.37 ms / 9.99 ms / 13.0 ms / 598 us
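The 100 IOPS figure and the ~400 KiB/s figure are the same limit seen two ways: at ioping's 4 KiB request size, 100 requests/s is exactly 400 KiB/s. My arithmetic:

```shell
# 100 IOPS x 4 KiB per request = 400 KiB/s, matching ioping's 400.3 KiB/s
awk 'BEGIN { printf "%d KiB/s\n", 100 * 4 }'
```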

AHA. I've run into the first real anti-benchmark wall.

But, I don't care at this point. :)

Update 1:
Some inbound network tests:

USA

wget -O /dev/null http://mirror.incero.com/100mb.test
/dev/null                 100%[====================================>] 100.00M  11.2MB/s    in 12s     
2016-11-29 13:36:25 (8.34 MB/s) - ‘/dev/null’ saved [104857600/104857600]

wget -O /dev/null http://lg.lax.us.ultravps.eu/100MB.test
/dev/null                 100%[====================================>]  95.37M  12.6MB/s    in 8.9s    
2016-11-29 13:39:33 (10.7 MB/s) - ‘/dev/null’ saved [100000000/100000000]

EU

wget -O /dev/null http://ping.online.net/1000Mo.dat
/dev/null                 100%[====================================>] 953.67M  36.7MB/s    in 30s     
2016-11-29 13:35:07 (32.1 MB/s) - ‘/dev/null’ saved [1000000000/1000000000]

 wget -O /dev/null http://lg.ams.nl.ultravps.eu/100MB.test
/dev/null                 100%[====================================>]  95.37M  29.4MB/s    in 4.2s    
2016-11-29 13:37:20 (22.8 MB/s) - ‘/dev/null’ saved [100000000/100000000]

wget -O /dev/null http://lg-dro.liteserver.nl/1024MB.test
/dev/null                 100%[====================================>] 976.56M  45.6MB/s    in 22s     
2016-11-29 13:38:42 (44.0 MB/s) - ‘/dev/null’ saved [1024000000/1024000000]

The 400 Mbit hard cap is easily reached with LiteServer's network blend :)

Update 2: fio

One-line review: BRUTAL IOPS wall.
Stopped the test after a few seconds.

./fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.15
Starting 1 process
^Cbs: 1 (f=1): [m(1)] [0.1% done] [288KB/112KB/0KB /s] [72/28/0 iops] [eta 02h:55m:11s]
fio: terminating on signal 2
Jobs: 1 (f=1): [m(1)] [0.1% done] [124KB/40KB/0KB /s] [31/10/0 iops] [eta 03h:06m:08s] 
test: (groupid=0, jobs=1): err= 0: pid=1913: Tue Nov 29 13:51:53 2016
  read : io=2968.0KB, bw=305389B/s, iops=74, runt=  9952msec
  write: io=1036.0KB, bw=106598B/s, iops=26, runt=  9952msec
  cpu          : usr=0.00%, sys=0.60%, ctx=916, majf=0, minf=9
  IO depths    : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=742/w=259/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: io=2968KB, aggrb=298KB/s, minb=298KB/s, maxb=298KB/s, mint=9952msec, maxt=9952msec
  WRITE: io=1036KB, aggrb=104KB/s, minb=104KB/s, maxb=104KB/s, mint=9952msec, maxt=9952msec

Disk stats (read/write):
  sda: ios=737/263, merge=0/2, ticks=605380/7584, in_queue=616652, util=99.13%

As you can see, the 100 IOPS limit applies to read + write combined.
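The ~3 hour ETA fio printed is exactly what a 100 IOPS cap predicts for this job: a 4 GiB file of 4 KiB requests is about a million I/Os. A back-of-envelope check of my own:

```shell
# 4 GiB / 4 KiB = 1048576 requests; at 100 IOPS that is ~2.9 hours,
# in line with fio's ~3 h ETA before the test was interrupted.
awk 'BEGIN { reqs = 4 * 1024 * 1024 * 1024 / 4096; printf "%d requests, ~%.1f h\n", reqs, reqs / 100 / 3600 }'
```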

I do wish I knew this beforehand.

400-1000 IOPS is more where I'd cap it.

This isn't a storage server after all.

Even those had 400 IOPS at launch time.

I was hoping to use this in conjunction with my storage VPSes with them for a hot/cold data solution. (logging and friends)

Hope @time4vps is listening.

Update 3

Obligatory Geekbench_x86_64 run : https://browser.geekbench.com/v4/cpu/1190022


More real-world bench suggestions welcome.

Maybe we could create a LET docker image with a bench suite run.

I'll post some fio 4k/8k tests in a bit for database performance. (see above)

I DON'T post sequential dd I/O tests. (hint: look up IOPS on Wikipedia)

Comments

  • Layer Member
    edited November 2016

    Questions:

    1) Does OpenVZ 7 allow extreme overselling of KVM like normal OpenVZ?

    2) Can you also install an OS via ISO (to encrypt the disk)?

  • vimalware Member
    edited November 2016

    @Layer said:
    Questions:

    1) Does OpenVZ 7 allow extreme overselling of KVM like normal OpenVZ?

    I don't know about this. Someone else more knowledgeable could comment on this please.

    2) Can you also install an OS via ISO (to encrypt the disk)?

    I did not see this option.

    It's probably coming soon, along with snapshots like they say.

    There is a VNC button that gives me a VNC ip:port:password and that's it.

  • doghouch Member

    @Layer said:
    Questions:

    1) Does OpenVZ 7 allow extreme overselling of KVM like normal OpenVZ?

    2) Can you also install an OS via ISO (to encrypt the disk)?

    I'm not sure what you mean, but yes, KVM is oversellable, like OpenVZ. The only thing you can really oversell with KVM is RAM, and like disk. (I've never personally tried overselling KVMs, so my experience level with it isn't that great)

    For your second question: yes, you can install another OS within a KVM.

    I'll tag @William since I'm sure he knows more about this

  • time4vps Member, Host Rep

    Thank you @vimalware for a nice thread! We are watching it very closely.

    Layer said: Questions:

    1) Does OpenVZ 7 allow extreme overselling of KVM like normal OpenVZ?

    Useful link for you that will answer most of your concerns about virtualization platforms:

    https://openvz.org/Comparison

    2) Can you also install an OS via ISO (to encrypt the disk)?

    We do not support custom ISOs yet.

    vimalware said: There is a VNC button that gives me a VNC ip:port:password and that's it.

    This is normal. We recommend VNC access only for critical use cases. Usually customers should use standard RDP or SSH access.

  • gleert Member, Host Rep

    Layer said: 1) Does OpenVZ 7 allow extreme overselling of KVM like normal OpenVZ?

    Yes:
    1. Memory over-commit
    2. Disk over-commit

  • You can oversell KVM massively, just like OpenVZ.

  • bacloud Member, Patron Provider

    It is possible to oversell on all virtualization platforms.

    Thanked by 1: Waldo19
  • gleert Member, Host Rep

    SolusVM KVM also??

  • vimalware said: So this could be openvz7(tm)-induced weirdness.

    You said it's a KVM, did I miss something?

  • @ratherbak3d said:

    vimalware said: So this could be openvz7(tm)-induced weirdness.

    You said it's a KVM, did I miss something?

    KVM hardware virtualization on openvz7/virtuozzo7 platform.

    see https://openvz.org/Comparison

  • vimalware said: KVM hardware virtualization on openvz7/virtuozzo7 platform.

    see https://openvz.org/Comparison

    Alright cool. I'm out of touch when it comes to OpenVZ. Seems interesting, though.

  • deadbeef Member
    edited November 2016

    @vimalware said:

    Great review!

    As you can see, the 100 iops limit is for both Read+write total.

    This is very disappointing, lower than the storage vps :o

  • @deadbeef said:

    @vimalware said:

    Great review!

    As you can see, the 100 iops limit is for both Read+write total.

    This is very disappointing, lower than the storage vps :o

    Tbh, I am glad that there are I/O limits, so we know what we will get and won't thrash the disk for other users. However, we should be allowed to burst or buy more IOPS, sort of like AWS.

    Thanked by 1: vimalware
  • William Member
    edited November 2016

    doghouch said: I'm not sure what you mean, but yes, KVM is oversellable, like OpenVZ. The only thing you can really oversell with KVM is RAM, and like disk. (I've never personally tried overselling KVMs, so my experience level with it isn't that great)

    No, you can oversell anything with KVM.

    CPU % and CPU ticks (limited either per process or QEMU-internal); memory (shared pages/deduplication); I/O MB/s and IOPS (per process or QEMU-internal); network bandwidth and packets/s (again, per process or QEMU-internal); disk (thin provisioning via GlusterFS, RBD, ZFS, ZFS over iSCSI... all can be thin provisioned).

    KVM can be exactly as oversold as OVZ, just not as simply.

  • **fio benchmark update**

    Time4vps has clearly paid attention to feedback and raised that stone age IOPS limit.

    So, I randomly decided to run the fio benchmark again with the same 75/25 r/w workload on a 4 GB test file, pre-generated manually from /dev/urandom.

    The TOTAL IOPS limit seems to be a very ample 5000+ now. (read + write IOPS)

    I'm seeing 8000 on some repeat tests.

    Here's the raw output of benchmark.

        ./fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=64k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
        test: (g=0): rw=randrw, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio, iodepth=64
        fio-2.15
        Starting 1 process
        Jobs: 1 (f=1): [m(1)] [100.0% done] [160.5MB/55104KB/0KB /s] [2567/861/0 iops] [eta 00m:00s]
        test: (groupid=0, jobs=1): err= 0: pid=443: Mon Dec  5 19:12:28 2016
          read : io=3075.6MB, bw=265546KB/s, iops=4149, runt= 11860msec
          write: io=1020.5MB, bw=88105KB/s, iops=1376, runt= 11860msec
          cpu          : usr=1.79%, sys=13.69%, ctx=28189, majf=0, minf=10
          IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
             submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
             complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
             issued    : total=r=49209/w=16327/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
             latency   : target=0, window=0, percentile=100.00%, depth=64
    
        Run status group 0 (all jobs):
           READ: io=3075.6MB, aggrb=265546KB/s, minb=265546KB/s, maxb=265546KB/s, mint=11860msec, maxt=11860msec
          WRITE: io=1020.5MB, aggrb=88105KB/s, minb=88105KB/s, maxb=88105KB/s, mint=11860msec, maxt=11860msec

        Disk stats (read/write):
          sda: ios=49208/16331, merge=0/2, ticks=708508/11768, in_queue=720596, util=99.29%
    
    Thanked by 4: deadbeef, sin, Yura, bersy
  • Any outage meanwhile? Production-ready?

  • MikePT Moderator, Patron Provider, Veteran

    Just ordered one, but seems limited still, at least in the service description:
    RAID Speed 50 MB/s, 100 IOPS

  • Could just be a massive SSD cache.

    But I'm cool with that.

  • Can install Windows = kvm

  • IOPS update: 400-limited again

    After a mystery forced reboot of the KVM today, I decided to check the IOPS again on a hunch.

    ioping -RD .
    
    --- . (ext4 /dev/sda1) ioping statistics ---
    1.20 k requests completed in 3.00 s, 400 iops, 1.56 MiB/s
    

    Looks like 400 is the new system limit for random IO.

    No complaints. I just hope it doesn't go lower.
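    Two quick consistency checks on the numbers in this thread (my own arithmetic, not the provider's): the later fio run sums to ~5.5k combined IOPS, and this 400 IOPS ioping result lines up exactly with the reported 1.56 MiB/s at 4 KiB requests:

    ```shell
    # fio update: 4149 read + 1376 write IOPS => combined figure observed
    # ioping update: 400 IOPS x 4 KiB = 1600 KiB/s = 1.56 MiB/s
    awk 'BEGIN { printf "%d combined IOPS; %.2f MiB/s\n", 4149 + 1376, 400 * 4 / 1024 }'
    ```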
