
Contabo introducing VPS with NVMe drives

Comments

  • @dev_vps said:
    @jsg @adly @stevewatson301

    is there any Linux script that can be used to determine if the VPS has any throttled I/O limit set?

    Use YABS, or if you know a bit about fio, use that instead.
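
    As a rough illustration only (this is neither YABS nor fio, and the file name, write count and block size below are arbitrary assumptions), a tiny Python sketch along these lines can hint at a throttle: if the reported IOPS sit at the same suspiciously round ceiling run after run, an artificial limit is a plausible explanation.

        # Minimal throttle probe: time 4 KiB synchronous writes and report IOPS.
        # A flat, round ceiling across repeated runs hints at an artificial limit;
        # an unthrottled disk usually shows more variation.
        import os, time

        PATH = "throttle_probe.tmp"   # hypothetical scratch file on the volume to test
        COUNT = 2000                  # number of 4 KiB sync writes
        BLOCK = b"\0" * 4096

        fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o600)
        try:
            start = time.monotonic()
            for _ in range(COUNT):
                os.write(fd, BLOCK)
            elapsed = time.monotonic() - start
        finally:
            os.close(fd)
            os.remove(PATH)

        print(f"{COUNT / elapsed:.0f} sync-write IOPS, "
              f"{COUNT * 4096 / elapsed / 1e6:.1f} MB/s")

    YABS (which wraps fio) remains the better tool; this only shows the idea.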

    Thanked by 2: adly, dev_vps
  • Special thanks to JSG, ... for boosting the Contabo NVMe issue.

    My conclusion: .. Contabo is Contabo as usual .... there is no significant difference from before.

    Thanked by 1: cybertech
  • dev_vps Member
    edited August 2021

    @jsg said:

    @dev_vps said:
    question is -

    is there any global I/O limit that applies to ALL the customers?
    Again, to all the customers

    That's something I'm also trying to find out, but ...

    if someone has "high-level contacts" with the provider, maybe we can get a definitive answer to this key question

    for the provider -
    sometimes it is better not to answer the question, as the answer may cause more damage.

  • jsg Member, Resident Benchmarker

    @dev_vps said:
    @jsg @adly @stevewatson301

    is there any Linux script that can be used to determine if the VPS has any throttled I/O limit set?

    Ask the yabs "experts" ...

    @dev_vps said:
    if someone has "high-level contacts" with the provider, maybe we can get a definitive answer to this key question

    Let me see how far I can get. Yes, my contact certainly has that info or can get at it, but still, how many vCores they put on a HWT is something many (most?) providers try to avoid talking about.

    And no, sadly an AUP line like "fair share is x %" is not really an answer but more of a figure they calculate, one based largely on the actual real-world use/load of the system, or in other words, on the fact that quite few users actually use their full fair share. At best I'd take it as a crude and vague indicator.

    But I'm with you: if it were up to me, I would require all providers to state clearly in every offer how many VPS vCores there are per HWT. But don't hold your breath ...

  • dedicatserver_ro Member, Host Rep
    edited August 2021

    @jsg said: Thanks for taking some time in between spying on customers

    • when you have no arguments 👍

    @all

    • you're asking for answers from someone who doesn't know what it's about and doesn't even know what to ask

    @dev_vps said:

    question is -

    is there any global I/O limit that applies to ALL the customers? Again, to all the customers

    • in order to answer you need to know what model of NVMe they have on the server and how many, and whether they apply any type of soft RAID-0 (RAID 1, 10 or anything else is BS on NVMe), if they work with internal storage
    • or what type of NVMe, what type of transfer (network... InfiniBand... which some don't seem to know, and compare 10 Gbps networking with 100 Gbps InfiniBand or 100 Gbps RoCE v2...) and what type of external storage (single storage... cluster storage...)
    • the type of CPU and the core count of the VM (in many cases you will be limited by the CPU, which some don't seem to know, and compare 1 core / 512 MB with 8 cores / 16 GB)
    • how many VMs are on the server

    You will thus find out the max theoretical I/O supported by the server's internal or external storage.

    NOW! You can test your VM and see if "there is any global I/O limit", considering:
    a. max I/O from the soft RAID
    b. max I/O from the connection to the external storage (one external storage unit can easily give you 50 GB/s, but if you have only a 10 Gbps connection you lose most of it)
    c. how many VMs are on the server and how many servers are on the external storage

    Most lowend providers use NVMe internally with direct access; below is an example of what to expect for max I/O at 4k.
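
    To make that ceiling arithmetic concrete, here is a back-of-the-envelope sketch in Python; every number in it is a made-up example (drive speed, stripe width, link speed, VM count), not a figure from Contabo or any other provider.

        # Hypothetical I/O ceilings; all inputs below are invented example values.
        nvme_drive_gbs = 3.0    # one NVMe drive, sequential throughput, GB/s
        stripe_width   = 4      # drives in a soft RAID-0
        link_gbps      = 10     # network link to external storage, Gbit/s
        vms_on_node    = 40     # VMs sharing the node

        internal_ceiling_gbs = nvme_drive_gbs * stripe_width   # local, striped NVMe
        external_ceiling_gbs = link_gbps / 8                   # link-limited, overhead ignored

        print(f"internal (striped NVMe) ceiling : {internal_ceiling_gbs:.1f} GB/s")
        print(f"external storage over the link  : {external_ceiling_gbs:.2f} GB/s")
        print(f"naive per-VM share (internal)   : {internal_ceiling_gbs / vms_on_node * 1000:.0f} MB/s")

    The point stands either way: external storage may be capable of 50 GB/s, but a 10 Gbps link caps each node at roughly 1.25 GB/s before any per-VM limit is even applied.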

  • jsg Member, Resident Benchmarker

    You pretend to know so much, yet ...

    @dedicatserver_ro said:

    @jsg said: Thanks for taking some time in between spying on customers

    • when you have no arguments

    Nope. I just reserve the right to not comment on obvious BS and emotionally motivated hit attempts.

    @dev_vps said:

    question is -

    is there any global I/O limit that applies to ALL the customers? Again, to all the customers

    • in order to answer you need to know what model of NVMe they have on the server and how many, and whether they apply any type of soft RAID-0 (RAID 1, 10 or anything else is BS on NVMe), if they work with internal storage

    Interesting. So mirroring a drive is BS? Strange.

    Mirroring a drive - any drive - means significantly enhanced safety of data. Customers' data, well noted.
    And striping a drive isn't BS either. As you obviously don't know, let me help you: striping very significantly reduces the accesses each individual drive has to serve, which is particularly desirable with VMs, and it also increases drive lifetime.

    • or what type of NVMe, what type of transfer (network... InfiniBand... which some don't seem to know, and compare 10 Gbps networking with 100 Gbps InfiniBand or 100 Gbps RoCE v2...) and what type of external storage (single storage... cluster storage...)

    Tech porn yada yada. Your super-duper 100 Gb/s Infiniband Roce v2 remote storage is quite slow compared to other providers' solutions.

    • how many VMs are on the server

    Well, how many VM vCores per HWT do you put? Is a vCore of say your currently offered €3/mo VM 1/3rd of a HWT? 1/4? Tell us!

  • dedicatserver_ro Member, Host Rep
    edited August 2021

    @jsg said: Interesting. So mirroring a drive is BS? Strange.

    • go to school..... mirroring a PCIe ( you know the difference between SAS / SATA drive and NVMe drive)

    @jsg said: Tech porn yada yada. Your super-duper 100 Gb/s Infiniband Roce v2 remote storage is quite slow compared to other providers' solutions.

    • when you have no arguments 👍 ( Infiniband Roce v2 remote storage - It does not exist, but you have no way of knowing)
  • Hey look! @dedicatserver_ro is pissing in someone else's garden again!

    Thanked by 1jsg
  • @dahartigan said: Hey look! @dedicatserver_ro is pissing in someone else's garden again!

    If @jsg weren't so full of shit then he wouldn't have a chance to attack him.

    Thanked by 1iKeyZ
  • jsg Member, Resident Benchmarker

    @dedicatserver_ro said:

    @jsg said: Interesting. So mirroring a drive is BS? Strange.

    • go to school..... mirroring a PCIe ( you know the difference between SAS / SATA drive and NVMe drive)

    I see, so a drive connected via PCIe magically can't break? And a drive connected via PCIe also magically makes copies of the data stored on it?

    It seems you are the one who missed some classes ...

    @jsg said: Tech porn yada yada. Your super-duper 100 Gb/s Infiniband Roce v2 remote storage is quite slow compared to other providers' solutions.

    • when you have no arguments 👍 ( Infiniband Roce v2 remote storage - It does not exist, but you have no way of knowing)

    Oops!

    Wikipedia says:
    RDMA over Converged Ethernet (RoCE) is a network protocol that allows remote direct memory access (RDMA) over an Ethernet network. It does this by encapsulating an Infiniband transport packet over Ethernet.

    when you have no arguments ...

    Wikipedia is simple enough for you, I hope ;)

    Btw, no matter how you try to deflect with repeated personal attacks, albeit barrel bursts, your VPSs' drive performance isn't exactly great.

    Oh and, how about answering my question?
    how many VM vCores per HWT do you put? Is a vCore of say your currently offered €3/mo VM 1/3rd of a HWT? 1/4? Tell us!

  • jsg Member, Resident Benchmarker

    @stevewatson301 said:

    @dahartigan said: Hey look! @dedicatserver_ro is pissing in someone else's garden again!

    If @jsg weren't so full of shit then he wouldn't have a chance to attack him.

    ... says Mr. clueless whose new hero isn't exactly winning his self-inflicted war ...

  • jsg Member, Resident Benchmarker

    @dahartigan said:
    Hey look! @dedicatserver_ro is pissing in someone else's garden again!

    He just does what he thinks he's good at. That he's thinking wrong might be his other specialty.

    Thanked by 1: dahartigan
  • dedicatserver_ro Member, Host Rep
    edited August 2021

    @jsg said: Wikipedia is simple enough for you

    • Looks like you're having trouble with.....

    @jsg said: It does this by encapsulating an Infiniband transport packet over Ethernet

    • is not written in Wikipedia, but nice try at BS...

    RDMA over Converged Ethernet (RoCE) is a network protocol that allows remote direct memory access (RDMA) over an Ethernet network. It does this by encapsulating an IB transport packet over Ethernet. There are two RoCE versions, RoCE v1 and RoCE v2. RoCE v1 is an Ethernet link layer protocol and hence allows communication between any two hosts in the same Ethernet broadcast domain. RoCE v2 is an internet layer protocol which means that RoCE v2 packets can be routed.

    ....

    RoCE versus InfiniBand

    @jsg said: how many VM vCores per HWT do you put? Is a vCore of say your currently offered €3/mo VM 1/3rd of a HWT? 1/4? Tell us!

    • 2 vCore - you should have known

  • jsg Member, Resident Benchmarker

    @dedicatserver_ro said:

    • is not written in Wikipedia, but nice try at BS...

    RDMA over Converged Ethernet (RoCE) is a network protocol that allows remote direct memory access (RDMA) over an Ethernet network. It does this by encapsulating an IB transport packet over Ethernet. There are two RoCE versions, RoCE v1 and RoCE v2. RoCE v1 is an Ethernet link layer protocol and hence allows communication between any two hosts in the same Ethernet broadcast domain. RoCE v2 is an internet layer protocol which means that RoCE v2 packets can be routed.

    Stop the BS already. You know perfectly well that 'IB' stands for 'Infiniband' and actually links to it.

    @jsg said: how many VM vCores per HWT do you put? Is a vCore of say your currently offered €3/mo VM 1/3rd of a HWT? 1/4? Tell us!

    • 2 vCore

    [screenshot]

    I don't see confirmation of your assertion in your screenshot. A screenshot of a fully loaded node showing the number of real cores & HWT along with the number of VM cores would be more useful.

    But if the number you tell ('2') is true, kudos, that's decent.

  • dedicatserver_ro Member, Host Rep
    edited August 2021

    @jsg said: Stop the BS already. You know perfectly well that 'IB' stands for 'Infiniband' and actually links to it.

    FC, FC-NVMe[33][34]
    TCP, NVMe/TCP[35]
    Ethernet, RoCE v1/v2 (RDMA over converged Ethernet)[36]
    InfiniBand, NVMe over InfiniBand or NVMe/IB[37]
    The standard for NVMe over Fabrics was published by NVM Express, Inc. in 2016.[38][39]

    The following software implements the NVMe-oF protocol:

    Linux NVMe-oF initiator and target.[40] RoCE transport was supported initially, and with Linux kernel 5.x, native support for TCP was added.[41]
    Storage Performance Development Kit (SPDK) NVMe-oF initiator and target drivers.[42] Both RoCE and TCP transports are supported.[43][44]
    Starwind NVMe-oF initiator and target for Microsoft Windows, supporting both RoCE and TCP transports.[45]

    • "Infiniband Roce v2 remote storage - It does not exist, but you have no way of knowing":
      a. Infiniband remote storage exist
      b. RoCe v2 remote storage exist

    • You don't seem to understand what you're reading

  • dedicatserver_ro Member, Host Rep
    edited August 2021

    @jsg said: I don't see confirmation of your assertion in your screenshot. A screenshot of a fully loaded node showing the number of real cores & HWT along with the number of VM cores would be more useful.

    But if the number you tell ('2') is true, kudos, that's decent.

    • let's see your knowledge of the OpenNebula cloud

  • @jsg said:

    @stevewatson301 said:

    @dahartigan said: Hey look! @dedicatserver_ro is pissing in someone else's garden again!

    If @jsg weren't so full of shit then he wouldn't have a chance to attack him.

    ... says Mr. clueless whose new hero isn't exactly winning his self-inflicted war ...

    Says the person making a million clock_gettime() syscalls and then is amazed that his benchmark isn't representative of real results, because he doesn't know about context switch and KPTI overheads.

  • @stevewatson301 said:

    @jsg said:

    @stevewatson301 said:

    @dahartigan said: Hey look! @dedicatserver_ro is pissing in someone else's garden again!

    If @jsg weren't so full of shit then he wouldn't have a chance to attack him.

    ... says Mr. clueless whose new hero isn't exactly winning his self-inflicted war ...

    Says the person making a million clock_gettime() syscalls and then is amazed that his benchmark isn't representative of real results, because he doesn't know about context switch and KPTI overheads.

    You just threw TNT on the fire..

  • jsg Member, Resident Benchmarker

    @dedicatserver_ro said:

    @jsg said: I don't see confirmation of your assertion in your screenshot. A screenshot of a fully loaded node showing the number of real cores & HWT along with the number of VM cores would be more useful.

    But if the number you tell ('2') is true, kudos, that's decent.

    • let's see your knowledge of the OpenNebula cloud

    I'm not here to be tested - and certainly not by you, who again and again offers tech mumbo jumbo to distract from a simple fact: your disks aren't great performers.

    @stevewatson301 said:

    @jsg said:

    @stevewatson301 said:

    @dahartigan said: Hey look! @dedicatserver_ro is pissing in someone else's garden again!

    If @jsg weren't so full of shit then he wouldn't have a chance to attack him.

    ... says Mr. clueless whose new hero isn't exactly winning his self-inflicted war ...

    Says the person making a million clock_gettime() syscalls and then is amazed that his benchmark isn't representative of real results, because he doesn't know about context switch and KPTI overheads.

    Cute attempt, but unfortunately (for you) you confused two similar-sounding but actually very different calls: gettimeofday(), used in fio, which is relatively expensive (which is why fio offers a mode that massively reduces those calls), and clock_gettime(), which is a very cheap/fast call. On modern systems the latter is pretty much just an ASM instruction, plus the time info is obtained from memory shared between the kernel and user space, which, sorry to disappoint you, does not involve crossing the kernel boundary ("a syscall").
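
    Whoever is right about the relative cost, the per-call price is easy to measure. A minimal Python/ctypes sketch, assuming Linux and glibc (the FFI overhead dominates, so treat the numbers only as rough upper bounds; on FreeBSD the clock constant and library name differ):

        # Time a million clock_gettime() and gettimeofday() calls via libc.
        import ctypes, time

        libc = ctypes.CDLL("libc.so.6", use_errno=True)

        class timespec(ctypes.Structure):
            _fields_ = [("tv_sec", ctypes.c_long), ("tv_nsec", ctypes.c_long)]

        class timeval(ctypes.Structure):
            _fields_ = [("tv_sec", ctypes.c_long), ("tv_usec", ctypes.c_long)]

        N = 1_000_000
        ts, tv = timespec(), timeval()
        CLOCK_MONOTONIC = 1          # Linux value

        start = time.perf_counter()
        for _ in range(N):
            libc.clock_gettime(CLOCK_MONOTONIC, ctypes.byref(ts))
        cg = time.perf_counter() - start

        start = time.perf_counter()
        for _ in range(N):
            libc.gettimeofday(ctypes.byref(tv), None)
        gt = time.perf_counter() - start

        print(f"clock_gettime: {cg / N * 1e9:.0f} ns/call (incl. ctypes overhead)")
        print(f"gettimeofday : {gt / N * 1e9:.0f} ns/call (incl. ctypes overhead)")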

    Not that I asked for it, but thanks for confirming again what I said: you do not even know what you are talking about.

  • bulbasaur Member
    edited August 2021

    @jsg said: obtained from memory shared between the kernel and user space

    Not since KPTI was implemented:
    https://en.wikipedia.org/wiki/Kernel_page-table_isolation

    I completely understand that urge to prove someone wrong, but it's a repeat of that HS saga once again, simply because you want to be right, not because you want to understand the truth.

    In addition, you're sidestepping the main issue, which is why you are making so many clock_gettime() calls in a CPU benchmark.

  • jsg Member, Resident Benchmarker
    edited August 2021

    @stevewatson301 said:

    @jsg said: obtained from memory shared between the kernel and user space

    Not since KPTI was implemented:
    https://en.wikipedia.org/wiki/Kernel_page-table_isolation

    As for the matter: KPTI or not (my benchmarks are done on FreeBSD, btw), the fact is that gettimeofday() is massively more expensive and slower than clock_gettime().

    I completely understand that urge to prove someone wrong, but it's a repeat of that HS saga once again, simply because you want to be right, not because you want to understand the truth.

    For a start, you (and a few others) have given in to your urge to prove someone (me) wrong. That's how this thing began.

    Plus, that's easy to prove false: I made a small error (I think even in this very thread) and I "confessed" it right away and even apologized.

    You (and some others), however, clearly and evidently are all about arguing against me. Just take this very post of yours. The fact is that gettimeofday() is massively more expensive and slower than clock_gettime(). Yet you only focused on finding some detail, any detail, no matter what, that seemed useful for showing that I was wrong. That is the only goal of this charade. Maybe your next argument will be that on some exotic architecture there is little difference between those two calls; if that exists you'll find it and try to use it against me.

    You can not win this stupid game. Simple reason: chances are you'll continue to fail, and if you don't, assuming you actually could show me a weak point, I'd simply thank you and improve my benchmark.

    Pardon me, but your attempts are boring. Give it a rest.

    Edit, following your edit:

    In addition, you're sidestepping the main issue, which is why you are making so many clock_gettime() calls in a CPU benchmark.

    BS. The clock cycles used are tiny and meaningless compared to the cycles used by the tests (e.g. writing to the disk). Once more you talk about something that seems like a good weapon but that you actually simply don't understand; you don't know what you are talking about.
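
    That proportionality claim is also something anyone can check on their own box rather than argue about: compare the cost of one clock read with the cost of one small synchronous write. A rough sketch (the scratch file name and counts are assumptions; absolute numbers vary wildly by hardware):

        # Compare the cost of one clock read with one 4 KiB O_SYNC write.
        import os, time

        N = 100_000
        start = time.perf_counter()
        for _ in range(N):
            time.clock_gettime(time.CLOCK_MONOTONIC)
        clock_ns = (time.perf_counter() - start) / N * 1e9

        path = "timer_vs_write.tmp"          # hypothetical scratch file
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o600)
        block = b"\0" * 4096
        writes = 1000
        start = time.perf_counter()
        for _ in range(writes):
            os.write(fd, block)
        write_ns = (time.perf_counter() - start) / writes * 1e9
        os.close(fd)
        os.remove(path)

        print(f"clock read ≈ {clock_ns:.0f} ns/call")
        print(f"sync write ≈ {write_ns:.0f} ns/op ({write_ns / clock_ns:.0f}x a clock read)")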

  • adly Veteran
    edited August 2021

    @jsg said: my benchmarks are done on FreeBSD

    ... and therefore are useless to 99% of people.

    @jsg said: You can not win this stupid game.

    Sure you can, back your claims up with evidence rather than making long posts that bullshit around the matter.

  • If he can not win he will say BS.. classic..

  • dev_vps Member
    edited August 2021

    @jsg said:

    @dev_vps said:
    @jsg @adly @stevewatson301

    is there any Linux script that can be used to determine if the VPS has any throttled I/O limit set?

    Ask the yabs "experts" ...

    @dev_vps said:
    if someone has "high-level contacts" with the provider, maybe we can get a definitive answer to this key question

    Let me see how far I can get. Yes, my contact certainly has that info or can get at it, but still, how many vCores they put on a HWT is something many (most?) providers try to avoid talking about.

    And no, sadly an AUP line like "fair share is x %" is not really an answer but more of a figure they calculate, one based largely on the actual real-world use/load of the system, or in other words, on the fact that quite few users actually use their full fair share. At best I'd take it as a crude and vague indicator.

    But I'm with you: if it were up to me, I would require all providers to state clearly in every offer how many VPS vCores there are per HWT. But don't hold your breath ...

    The key question that I was referring to is this one:

    is there a throttle on max I/O that applies to ALL the customers?

    So far no reply from @contabo_m

  • adly Veteran
    edited August 2021

    @dev_vps that's on you to judge. Contabo is extremely unlikely to admit to anything.

  • dev_vps Member
    edited August 2021

    @adly said:
    @dev_vps that's on you to judge. Contabo is extremely unlikely to admit to anything.

    Well, they are the ones putting this quote on their website:
    “This NVMe is three times faster than any other NVMe tested (at other providers)”

    That does put the onus on them to justify that quote with some benchmark numbers.

    For a layman customer: does it mean writing a 20 GB data file (using their VPS with the latest NVMe) will take 1/3 of the time it takes on any competitor's VPS with NVMe storage?

  • dedicatserver_ro Member, Host Rep
    edited August 2021

    @jsg said: I'm not here to be tested - and certainly not by you, who again and again offers tech mumbo jumbo to distract from a simple fact: your disks aren't great performers.

    • that's all? After I gave you all the required information, you don't know any more?

    @jsg said: As for the disks the results are somewhat mixed and in part even disappointing. The reason seems to be that some kind of "Cloud disks" are used (along with a large cache). So the buffered NVMe results are really nice but the direct/sync results are poor and largely about at the level of a crappy SSD.

    • professional response.... do you know what you have tested? - "crappy SSD" - an NVMe Samsung PM1725b
  • @chocolateshirt said:
    If he can not win he will say BS.. classic..

  • adly Veteran

    @dev_vps said:
    Well, they are the ones putting this quote on their website:
    “This NVMe is three times faster than any other NVMe tested (at other providers)”

    That does put the onus on them to justify that quote with some benchmark numbers

    Well, I can say my penis is three times longer than any other penis measured. But I also don't have to show you my penis.

    Thanked by 2: iKeyZ, TimboJones
  • jsg Member, Resident Benchmarker

    @adly said:

    @jsg said: my benchmarks are done on FreeBSD

    ... and therefore are useless to 99% of people.

    Yeah right, because FreeBSD magically makes slow processors and memory fast and fast processors and memory slow, and it also magically changes slow disks to fast ones and vice versa.

    @jsg said: You can not win this stupid game.

    Sure you can, back your claims up with evidence rather than making long posts that beat around the matter.

    OK, I've had it with you assholes.

    (a) You do not set the rules for me. (b) Guess whose opinion I care more about: that of a significant provider or that of an obtrusive idiot? (c) I'm not selling anything. I wrote benchmark software in my own time and with my own work, which I use for benchmarking and writing reviews, again in my own time and with my own work (and lots of it), and which I provide for free to our community - and you assholes have nothing but demands and vitriol as a thank-you?

    Nobody forces you to read my reviews, just like nobody forces me to read certain threads of certain providers whom I find obtrusive. So I simply don't read them, case closed. You could do the same, but no, instead of saying "I don't like your reviews but thanks for the effort" you fervently try to bash me, my software and my work, and to abuse anything I say, trying to find something, anything, you feel could be useful in your little private war against someone who has asked nothing from you but halfway decent behaviour.

    I'll be generous and not ask what you did for this community or what qualifications you have ...
