New dedicated server with NVMe storage

FredQc Member
edited February 2017 in General

Hello all,

So I got a new dedicated server from OVH: Serveur Enterprise SP-128 (128 GB RAM, E5-1650v4, 2 x 450 GB NVMe).

The question is: what speed should I expect from these "disks"?

The specs are quite good: http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-dc-p3520-series.html

For now this is what I get:

root@bhsnvme:~# hdparm -tT /dev/nvme0n1p1

/dev/nvme0n1p1:
 Timing cached reads:   25770 MB in  2.00 seconds = 12897.75 MB/sec
 Timing buffered disk reads: 510 MB in  0.86 seconds = 591.12 MB/sec

root@bhsnvme:~# hdparm -tT /dev/nvme1n1p1

/dev/nvme1n1p1:
 Timing cached reads:   25460 MB in  2.00 seconds = 12742.54 MB/sec
 Timing buffered disk reads: 1642 MB in  3.00 seconds = 547.03 MB/sec

root@bhsnvme:~# dd if=/dev/zero of=test_$$ bs=64k count=16k conv=fdatasync && rm -f test_$$
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 2.0256 s, 530 MB/s

I've not installed the server in RAID because I've read it would degrade performance with NVMe. It seems to me that those numbers are pretty low for a PCIe x4 connection compared to SATA-3. In fact, I got the same results from the Intel DC 3700 480GB; the NVMe drives should be faster, at least on the read side.
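
One thing worth checking before blaming the drives themselves is whether each controller actually negotiated a PCIe 3.0 x4 link. A minimal sketch, assuming pciutils is installed (the 03:00.0 bus address is only an example; use whatever addresses the first command reports):

lspci | grep -i "non-volatile"                  # list the NVMe controllers and their bus addresses
lspci -vv -s 03:00.0 | grep -E "LnkCap|LnkSta"  # compare capable vs. negotiated link speed/width

If LnkSta shows x4 at 8GT/s, the slot and riser are fine and the bottleneck is elsewhere.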

What do you think?

PS. A ticket is open with OVH regarding this, and they say that they are looking into it...

Thanked by doughmanes

Comments

  • Clouvider Member, Patron Provider

    You should get at least double that out of those drives if the hardware is correctly installed.

  • Can you run the following:

    fio --filename=/dev/nvme1n1p1 --name=randwrite --ioengine=libaio --iodepth=16 --rw=randwrite --bs=4k --direct=1 --numjobs=8 --runtime=240 --group_reporting
    
  • Intel claims "up to" 1200MB/s for those drives, but who knows under what conditions.

    http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-dc-p3520-series.html

    Web search finds lots of tests and reviews:

    https://www.google.com/search?q=p3520+nvme+intel
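
    To confirm exactly which model and firmware revision OVH shipped before comparing against those reviews, something like this would tell you (a sketch, assuming the nvme-cli package is installed):

    nvme list                    # model, firmware revision and capacity of each NVMe drive
    nvme smart-log /dev/nvme0    # wear, temperature and error counters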

  • Clouvider said: You should get at least double that out of those drives if the hardware is correctly installed.

    That's what I'm thinking.

    serverian said: Can you run the following

    root@bhsnvme:~# fio --filename=/dev/nvme1n1p1 --name=randwrite --ioengine=libaio --iodepth=16 --rw=randwrite --bs=4k --direct=1 --numjobs=8 --runtime=240 --group_reporting
    randwrite: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=16
    ...
    fio-2.1.11
    Starting 8 processes
    Jobs: 8 (f=8): [w(8)] [100.0% done] [0KB/559.3MB/0KB /s] [0/143K/0 iops] [eta 00m:00s]
    randwrite: (groupid=0, jobs=8): err= 0: pid=8210: Tue Feb 28 21:17:06 2017
      write: io=144550MB, bw=616743KB/s, iops=154185, runt=240002msec
        slat (usec): min=0, max=4006, avg= 1.70, stdev= 1.22
        clat (usec): min=0, max=15144, avg=827.89, stdev=874.85
         lat (usec): min=6, max=15146, avg=829.66, stdev=874.84
        clat percentiles (usec):
         |  1.00th=[    8],  5.00th=[   10], 10.00th=[   12], 20.00th=[   18],
         | 30.00th=[   29], 40.00th=[   60], 50.00th=[  788], 60.00th=[ 1080],
         | 70.00th=[ 1384], 80.00th=[ 1736], 90.00th=[ 1912], 95.00th=[ 2040],
         | 99.00th=[ 2416], 99.50th=[ 2896], 99.90th=[ 7008], 99.95th=[ 7648],
         | 99.99th=[ 9024]
        bw (KB  /s): min=63904, max=97696, per=12.51%, avg=77128.63, stdev=4261.67
        lat (usec) : 2=0.01%, 4=0.01%, 10=3.93%, 20=18.39%, 50=16.08%
        lat (usec) : 100=3.57%, 250=2.27%, 500=2.76%, 750=2.59%, 1000=7.03%
        lat (msec) : 2=37.29%, 4=5.65%, 10=0.43%, 20=0.01%
      cpu          : usr=1.90%, sys=4.77%, ctx=26755564, majf=0, minf=66
      IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
         issued    : total=r=0/w=37004918/d=0, short=r=0/w=0/d=0
         latency   : target=0, window=0, percentile=100.00%, depth=16
    
    Run status group 0 (all jobs):
      WRITE: io=144550MB, aggrb=616743KB/s, minb=616743KB/s, maxb=616743KB/s, mint=240002msec, maxt=240002msec
    
    Disk stats (read/write):
      nvme1n1: ios=0/36991876, merge=0/0, ticks=0/30570120, in_queue=30611052, util=100.00%
    

    Thanks for mentioning this. I had forgotten to run this one. iops=154185 is within spec after all. The remaining concern is the throughput (see the sequential-read sketch at the end of this post). I've tested the config in RAID0 for "science" and could only achieve ~800MB/s read/write.

    willie said: Web search finds lots of tests and reviews

    Indeed, and those drives scored better in those reviews; that's why I'm concerned ;-)
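
    For the throughput side specifically, a large-block sequential read is a closer match to Intel's "up to 1200 MB/s" figure than 4k random writes. A minimal fio sketch (read-only, so it is safe to point at the raw device; the device name and runtime are just examples):

    fio --filename=/dev/nvme1n1 --name=seqread --ioengine=libaio --iodepth=32 --rw=read --bs=1M --direct=1 --numjobs=1 --runtime=60 --group_reporting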

  • FredQc said: iops=154185

    That's a proper number. NVMe drives won't get you more dd throughput just like that; IOPS is what matters with them. There is nothing wrong with that drive.

    That shitty dd command depends on the CPU's single-thread performance and on memory performance as well.

    You should see bigger numbers if you use a bigger block size.

    Try running the command below and see:

    dd if=/dev/zero of=test_$$ bs=1M count=1k conv=fdatasync && rm -f test_$$

    If you want even bigger numbers, you can use that SSD with an E3 CPU. I didn't check, but if you see a bigger number somewhere for the same SSD with that same dd command, I would bet it was in a server whose CPU has better single-thread performance than your E5.
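
    If you also want to take the page cache and memory bandwidth out of the picture, a direct-I/O variant of the same command is one option (a sketch; oflag=direct is a standard GNU dd flag, and bs=1M keeps the writes aligned as direct I/O requires):

    dd if=/dev/zero of=test_$$ bs=1M count=1k oflag=direct conv=fdatasync && rm -f test_$$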

    Thanked by vimalware
  • serverian said: Try running the command below

    root@bhsnvme:~# dd if=/dev/zero of=test_$$ bs=1M count=1k conv=fdatasync && rm -f test_$$
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB) copied, 2.03833 s, 527 MB/s
    

    serverian said: That shitty dd command depends on the CPU's single-thread performance and on memory performance as well.

    serverian said: If you want even bigger numbers, you can use that SSD with an E3 CPU. I didn't check, but if you see a bigger number somewhere for the same SSD with that same dd command, I would bet it was in a server whose CPU has better single-thread performance than your E5.

    http://browser.primatelabs.com/v4/cpu/compare/1870519?baseline=1626836

    I don't think my CPU performance is the issue here...

  • Just a nice update for those who care:

    I've tested the SSDs in rescue mode and got some nicer speeds:

    root@rescue:~# hdparm -tT /dev/nvme0n1p1
    
    /dev/nvme0n1p1:
     Timing cached reads:   25156 MB in  2.00 seconds = 12590.54 MB/sec
     Timing buffered disk reads: 3488 MB in  3.00 seconds = 1162.34 MB/sec
    

    I wasn't so paranoid after all, heh.
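
    Since rescue mode boots a different kernel than the installed system, a quick comparison of the two environments might explain the gap. A minimal sketch (the device name is an example, and iostat assumes the sysstat package is installed):

    uname -r                                  # rescue and installed kernels likely differ
    cat /sys/block/nvme0n1/queue/scheduler    # I/O scheduler selected for the drive
    iostat -x 1 5                             # check whether anything else is hammering the disks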

    Then I tried to reinstall the server with CentOS 7.3 from the manager, instead of Proxmox, to see if I could get the right speed... but it's been a major clusterfuck. It gets stuck at

    Rebooting under fresh system ( 5 / 8 )

    Every time, support reset the installation for me, but this time it's been stuck like this for 8 hours. No answer from support. All I get is this in the manager:

    An intervention is being carried out on this server, bla bla bla
    

    My server has been offline for more than 36 hours now. At this point, I just want to push the button:

    Delete my service
    

    Thanks for letting me vent a bit.

    /feels better... or not

    Thanked by inthecloudblog
  • WSS Member

    Just leave it stuck- eventually it'll time out and they will reinstall it for you.

    If you want to get their attention, first install FreeBSD 11 on it, then attempt to reinstall CentOS. It'll hang, and they'll come have a look at it in about an hour.

  • Clouvider Member, Patron Provider

    @FredQc if you're after one in the UK I have a server with NVMe built out, ready to go :-).

  • WSS said: Just leave it stuck- eventually it'll time out and they will reinstall it for you.

    It's been stuck for a very long time now. Can't reinstall, can't reboot, can't access IPMI, can't do anything.

    I finally got an answer from the staff:

    Hello,

    Our level 2 technicians are currently in discussion with the distribution
    specialists. It seems the installation fails with an error asking for a small
    /boot partition, even though you did add that partition. We have tried several
    tests, as well as swapping the disks, without success. The template appears to
    have a problem and we are waiting on a fix from the distribution team. We will
    compensate you for the lost paid time, and we apologize for the problems
    caused by the installation delay. You can still install the server over IPMI
    with your own ISO if you don't have time to wait for a fix from our teams.

    If you have any other questions, don't hesitate to contact us by ticket or by
    phone at 1-855-684-5463. We are available 24/7 to help you!

    TL;DR This looks like a major problem, and I'm fucked. Good news: they will give me some credit for this, and they are sorry.

    Clouvider said: if you're after one in the UK I have a server with NVMe built out, ready to go :-).

    Thanks, I'm sure it's good coming from you, but the UK is too far away ;-)

    Thanked by Clouvider
  • WSS Member

    Welp, that is strange. My guess is they're going to send a level 1 tech in there to do the installation until it happens again.

  • FredQc Member
    edited March 2017

    Well, I ordered the new SP-32 NVMe in BHS and everything is perfect:

    root@bhs32:~# hdparm -tT /dev/nvme1n1p1
    
    /dev/nvme1n1p1:
     Timing cached reads:   35022 MB in  2.00 seconds = 17536.01 MB/sec
     Timing buffered disk reads: 3374 MB in  3.00 seconds = 1124.65 MB/sec
    

    As a bonus, it has the "new" E3-1245v5 chip and DDR4.

    https://browser.geekbench.com/v4/cpu/2032494

    ;-)

    Thanked by Falzo
  • @Clouvider what are the specs for the server?

  • Clouvider Member, Patron Provider

    @racksx default spec we have at the minute is E3-1270V5, 2x 512 GB NVMe, 2x 2TB SATA III, 32 GB DDR4 ECC, 1Gbit/s uplink, 10 TB Bandwidth - let me know if you're interested, we can discuss the pricing :).

  • Bopie Member

    @Clouvider said:
    @racksx default spec we have at the minute is E3-1270V5, 2x 512 GB NVMe, 2x 2TB SATA III, 32 GB DDR4 ECC, 1Gbit/s uplink, 10 TB Bandwidth - let me know if you're interested, we can discuss the pricing :).

    I'd be interested as well ;) but I know you're not cheap, and that spec is just wow, so I'd say maybe in excess of £200. Let me know anyway please, @Clouvider.
