Are you happy with ssdnodes NVMe speed?

seenu Member

Hello,

If anyone is using SSDNodes' performance plans (i.e. NVMe disks), I want to know: are you happy with their speed?

My benchmark tests show their speed is almost the same as ordinary SSD drives.

Honestly, I'm stunned by these speeds https://www.lowendtalk.com/discussion/comment/3002030/#Comment_3002030 and under the impression that SSDNodes are actually using SSDs but advertising them as NVMe.

Thanks.

Comments

  • Bopie Member

    What speeds are you getting with ssdnodes? Can you post a benchmark? My NVMe plans show only around 600 MB/s with dd, but once you run a test with fio you see the advantage of NVMe over SSD.
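    For reference, the dd one-liner most benchmark scripts run only measures buffered sequential throughput. A minimal sketch (file name and size are illustrative, not from the thread):

```shell
# Sequential write test with dd; conv=fdatasync forces a flush to disk
# before dd reports its timing, so the page cache doesn't inflate the result.
dd if=/dev/zero of=dd_test.img bs=1M count=64 conv=fdatasync

# Clean up the test file afterwards.
rm -f dd_test.img
```

    This kind of test says nothing about random IOPS at depth, which is where NVMe actually differs from SATA SSDs.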

  • Wait till you bench Hetzner NVMes.

    Thanked by 1Bopie
  • Bopie Member

    @cybertech I noticed their bench was bad as well. When I mentioned it to their support, they also said that dd is not an accurate benchmark for NVMe ;)

    Thanked by 1cybertech
  • cybertech Member
    edited July 2019

    @Bopie said:
    @cybertech I noticed their bench was bad as well. When I mentioned it to their support, they also said that dd is not an accurate benchmark for NVMe ;)

    Yeah, their Nuremberg ones perform the best; the Falkenstein and Helsinki ones don't seem to have good IOPS.

    Then again, it could be a problem with CentOS 7.

  • Bopie Member

    @cybertech I always use the latest EPEL kernel, so I don't seem to have any issues myself. I believe I'm currently on 5.2, but that's also because I use BBR ;)

  • sgheghele Member
    edited July 2019

    Bopie said: once you run a test with fio you see the advantage of NVMe over SSD

    This, 100x this. So many people use bench.sh (which uses dd) and then complain about the performance of SSD and NVMe arrays, or celebrate on provider threads because they see 1000 MB/s average dd sequential write speed. Unless those numbers drop to something ridiculous, say 60 MB/s, they say nothing about SSD and NVMe performance.

    Thanked by 2cybertech sin
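    For anyone who wants to see the random-IOPS difference described above, a minimal fio job file might look like this (the engine, depth, and sizes are illustrative choices, not from the thread):

```ini
; randread.fio -- 4k random reads at queue depth 32, the kind of
; workload where NVMe pulls away from SATA SSDs (values illustrative)
[global]
ioengine=libaio
direct=1
runtime=30
time_based

[randread-4k]
rw=randread
bs=4k
iodepth=32
size=1G
filename=fio_test.img
```

    Run it with `fio randread.fio` and compare IOPS rather than MB/s.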
  • @Bopie said:
    @cybertech I always use the latest EPEL kernel, so I don't seem to have any issues myself. I believe I'm currently on 5.2, but that's also because I use BBR ;)

    Oh, I thought yours was good because of ZFS, haha.

  • @sgheghele said:

    Bopie said: once you run a test with fio you see the advantage of NVMe over SSD

    This, 100x this. So many people use bench.sh (which uses dd) and then complain about the performance of SSD and NVMe arrays, or celebrate on provider threads because they see 1000 MB/s average dd sequential write speed. Unless those numbers drop to something ridiculous, say 60 MB/s, they say nothing about SSD and NVMe performance.

    And that separates the amateurs/hobbyists from the professionals/providers.

  • @Bopie said:
    @cybertech I always use the latest EPEL kernel, so I don't seem to have any issues myself. I believe I'm currently on 5.2, but that's also because I use BBR ;)

    @Bopie, some advice please!
    If we use BBR (net.core.default_qdisc=fq and net.ipv4.tcp_congestion_control=bbr), should we tweak other TCP params such as:
    net.ipv4.tcp_max_tw_buckets=400000
    net.ipv4.tcp_tw_reuse=1
    net.core.netdev_max_backlog=20480
    net.ipv4.tcp_rmem = 4096 25165824 25165824
    net.core.rmem_max = 25165824
    net.core.rmem_default = 25165824
    net.ipv4.tcp_wmem = 4096 65536 25165824
    net.core.wmem_max = 25165824
    net.core.wmem_default = 65536
    net.core.optmem_max = 25165824

    I just read that with Google BBR we don't need to tweak the TCP stack. What's your opinion?
    Thanks.

  • Bopie Member

    @SashkaPro said:

    @Bopie said:
    @cybertech I always use the latest EPEL kernel, so I don't seem to have any issues myself. I believe I'm currently on 5.2, but that's also because I use BBR ;)

    @Bopie, some advice please!
    If we use BBR (net.core.default_qdisc=fq and net.ipv4.tcp_congestion_control=bbr), should we tweak other TCP params such as:
    net.ipv4.tcp_max_tw_buckets=400000
    net.ipv4.tcp_tw_reuse=1
    net.core.netdev_max_backlog=20480
    net.ipv4.tcp_rmem = 4096 25165824 25165824
    net.core.rmem_max = 25165824
    net.core.rmem_default = 25165824
    net.ipv4.tcp_wmem = 4096 65536 25165824
    net.core.wmem_max = 25165824
    net.core.wmem_default = 65536
    net.core.optmem_max = 25165824

    I just read that with Google BBR we don't need to tweak the TCP stack. What's your opinion?
    Thanks.

    I'm not a network guy in the least; I pay someone to do that for me. But the difference we saw between no BBR and then installing it without any tweaks was amazing: we went from around 10 MB/s to 700 MB/s. However, this was on the host we used, which is pure Cogent, known for congestion.
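    For completeness, enabling BBR without any of the extra buffer tuning quoted above takes just two sysctls. A minimal sketch (the file path is one common convention; BBR needs kernel 4.9 or newer):

```
# /etc/sysctl.d/90-bbr.conf -- minimal BBR enablement with no other
# TCP buffer tweaks, per the discussion above (requires kernel >= 4.9)
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
```

    Apply with `sysctl --system` and verify with `sysctl net.ipv4.tcp_congestion_control`.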

  • willie Member

    They are huge machines with a lot of VMs splitting up those IOPS, so of course it won't be anything like a dedi. Also, lots of not-so-fast SSDs use the NVMe interface now. But most of these smaller SSD/NVMe plans are still great value. ssdnodes is possibly the odd one out: it's super low priced, but performance can be variable, based on reviews I've seen. I'd go with something more mainstream if it were me.

  • seenu Member

    @willie said:
    They are huge machines with a lot of VMs splitting up those IOPS, so of course it won't be anything like a dedi. Also, lots of not-so-fast SSDs use the NVMe interface now. But most of these smaller SSD/NVMe plans are still great value. ssdnodes is possibly the odd one out: it's super low priced, but performance can be variable, based on reviews I've seen. I'd go with something more mainstream if it were me.

    Can you recommend any mainstream providers with NVMe?

    Actually, I thought SSDNodes' service would be much better because of their website, their comparison with DO, and their claims about speed, but when I benchmarked it, the results showed otherwise.

    Thanks.

  • willie Member

    seenu said: can you recommend any mainstream providers with NVMe?

    I wouldn't worry much about the SSD interface; it's more about the server load. That said, @MikeA's NVMe servers are great. I also had fast results with a Vultr High Frequency instance, but I don't know what kind of SSD it uses.

  • Take a look at HostDoc's plans. They have been pretty consistent for me.
