Comments
I agree, but since the vCPU is shared, performance varies over the course of the day. This is based on machine-learning tests I ran on the 1.5 GB plan.
Because of this, I plan to use a VPS where at least 50% of the vCPU is dedicated.
My experience with another VPS, where both logical cores are pinned to the VM, has been great.
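One way to quantify that time-of-day variability is to watch CPU "steal" time, the share of cycles the hypervisor gives to other guests instead of your vCPU. A minimal sketch (Linux-only, reading `/proc/stat`; the sampling interval is just an illustrative choice):

```python
# Sketch: estimate CPU steal time on a shared-vCPU VPS by sampling
# /proc/stat twice. A persistently high steal percentage is one likely
# cause of the performance swings described above. Field layout per proc(5):
# the 8th value on the "cpu" line is steal jiffies.
import time

def cpu_steal_percent(interval: float = 1.0) -> float:
    def snapshot():
        with open("/proc/stat") as f:
            fields = [int(x) for x in f.readline().split()[1:]]
        total = sum(fields)
        steal = fields[7] if len(fields) > 7 else 0
        return total, steal

    t0, s0 = snapshot()
    time.sleep(interval)
    t1, s1 = snapshot()
    dt = t1 - t0
    return 100.0 * (s1 - s0) / dt if dt else 0.0

if __name__ == "__main__":
    print(f"steal: {cpu_steal_percent():.2f}%")
```

Running this every few minutes over a day would show whether benchmark dips line up with neighbor activity.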
Which VPS did you test?
[RYZEN-NVMe] 1.5 GB Ryzen KVM VPS
1x AMD Ryzen 3900X CPU Core
20 GB NVMe SSD Storage
1.5 GB DDR4 RAM
https://browser.geekbench.com/v5/cpu/7513083
@dustinc
Another benchmark run on the 1.5 GB VPS:
https://browser.geekbench.com/v5/cpu/7534528
@dustinc
Running Visual Studio Code in a Linux desktop on the 1.5 GB Ryzen VPS ... though I would highly recommend a 2.5 GB or larger VPS.
@dustinc Hi, how are you doing?
Do you think it would be possible for you to put together a special package for me?
I need about 5 VPSes in the range of the $20/yr specs you normally offer, for example:
[INTEL-SSD] 2.5 GB KVM VPS Special (I already have 2 of these)
Ryzen Linux VPS Specials - 1GB Ryzen VPS (I just bought one, order number 4669454108)
I guess you can get my email from that order number; feel free to contact me there.
Or maybe you have a Telegram/email address where we can discuss this better?
I need a 1-year plan.
Please let me know.
Ehm... you know, there is a support ticket system, an email address, a live chat, and the option of a private message. May I ask why you chose the 'public offer thread request'?
Cheers.
Sure, you may:
Awesome
Hi @ivysaur -- looks great! Thank you for sharing this benchmark.
@dustinc is the double-bandwidth offer still valid in the old thread?
https://www.lowendtalk.com/discussion/168133/crazy-deals-crazy-giveaway-the-hottest-black-friday-thread-40-prizes-by-racknerd/p158
I see it was valid until a week ago; a couple of people posted their invoice numbers but didn't get a confirmation from you on whether it is still valid or not...
Hi @dev_vps -- looks great! Thank you for sharing the benchmark.
Some locations need more work on the network side: I just opened a ticket for one of the LA Intel servers. Maybe you have some abusers, or something needs checking. IPv6 also stopped working on my LA server; I would use an IPv6 tunnel, but they limit me to 5 per account and I already have 3 accounts.
I have 14 VPSes, and only the two in LA have issues with laggy SSH: one server is very slow, the other just has huge lag. CPU is fine.
Hi @coelhofaminto -- doing well, thanks for reaching out! I hope all is well with you too.
In terms of custom deals: We're flexible and open to custom solutions. Let's talk!
[email protected] -- I'll also send you an email.
Hi @reb0rn -- thank you for commenting. I would love to help.
Out of curiosity, what type of network issues are you seeing in LA? Do you have any speed tests or MTR results for review? Would love to help. Please share the ticket number here, and/or email [email protected]
Yeah, one LA server is very slow. I just opened a ticket; support is on it.
182379
It's nice that support is quick on the job, though.
Provider | Location (link) | Download | Upload
Clouvider | London, UK (10G) | 7.67 Mbits/sec | 5.93 Mbits/sec
Online.net | Paris, FR (10G) | 6.05 Mbits/sec | 4.40 Mbits/sec
WorldStream | The Netherlands (10G) | busy | busy
Biznet | Jakarta, Indonesia (1G) | busy | 3.01 Mbits/sec
Clouvider | NYC, NY, US (10G) | 6.04 Mbits/sec | 4.49 Mbits/sec
I did not open a ticket for the second LA server, as its speed test is not so bad, but it has noticeably more lag than my other servers.
Hi @reb0rn -- thank you for replying. Happy to hear that support has been fast and is on it. I'm also going to oversee that ticket now, as I'm eager to know what is going on with that particular node.
Appreciate you bringing this to our attention.
Hi @reb0rn -- I was able to oversee the resolution of your ticket. Thank you once again for bringing this to our attention.
For those curious, we were able to replicate the issue - and had facility hands replace the network cable connections to the node, which resolved this and now the node is able to properly auto-negotiate speeds and connectivity with the switch.
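A failed auto-negotiation like this usually shows up as a link speed far below the NIC's rating. A quick sanity check can read the negotiated speed the kernel exposes in sysfs (the same value `ethtool` reports); the interface name and the 1000 Mb/s expectation below are illustrative assumptions, not RackNerd's actual tooling:

```python
# Sketch: read the negotiated link speed for a NIC from sysfs.
# A gigabit port stuck at 10/100 Mb/s is a classic symptom of a bad
# cable or failed auto-negotiation. Linux-only.
from pathlib import Path

def link_speed_mbps(iface: str = "eth0") -> int:
    # Returns -1 if the interface is missing, the link is down,
    # or the driver does not report a speed.
    path = Path(f"/sys/class/net/{iface}/speed")
    try:
        return int(path.read_text().strip())
    except (ValueError, OSError):
        return -1

if __name__ == "__main__":
    speed = link_speed_mbps("eth0")
    if 0 < speed < 1000:
        print(f"warning: eth0 negotiated only {speed} Mb/s")
    else:
        print(f"eth0 link speed: {speed} Mb/s")
```

Alerting when this value drops below the expected rate catches cable faults before customers notice throughput problems.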
Yeah, all fixed. They also offered to migrate me to SSD, which is not bad either.
So far your support is fast, and I hope in time you can reach Hetzner's level in terms of platform kinks and stability.
Hi @reb0rn – thank you for working with us on this, and happy to hear it!
We are in the process of phasing out our SSD-cached infrastructure in Los Angeles, and as others have mentioned on LET in some of our previous threads, some customers are already starting to receive migration notices letting them know they've been upgraded to pure-SSD infrastructure at no extra charge. All of our other locations already run on 100% pure-SSD infrastructure, some even on NVMe with our AMD Ryzen offerings.
With regards to stability, our slogan is "Introducing Infrastructure Stability", and this is something we strive for each and every day. That is not to say issues won't happen, but we take a lot of precautions to prevent them by building in infrastructure redundancy. Some examples: redundant storage (RAID-protected nodes), dual NICs and dual power supplies on most nodes, and redundancy at the facility level.
For our scale, which is hundreds of physical VPS nodes and thousands of bare-metal dedicated servers, the issues we have are rare, and in most cases we are able to proactively spot and correct abnormal behavior. We are quite transparent about this as well, which is why we do not remove any incidents from our status page. For folks who are inclined to do so, you can go to https://status.racknerd.com/, click on "previous week", and keep going back all the way to when we first introduced the status page, and compare the number of issues we've had spanning several months, which is minimal (keep our scale and the number of nodes we have deployed in mind in that equation as well). We share every type of maintenance we do on the status page, even a simple maintenance event such as replacing a network cable, or other incidents where most customers won't even notice a blip.
We proactively monitor all of our infrastructure using both external tools such as Hetrix and in-house proprietary monitoring scripts. Combined with our 24x7 staffing, this helps us spot issues before they become problems.
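For a sense of what such a probe looks like, here is an illustrative sketch only: a minimal TCP reachability check of the kind an in-house script might run alongside an external service like Hetrix. The host addresses, port, and timeout below are placeholders, not RackNerd's actual monitoring setup:

```python
# Illustrative sketch: probe a node with a TCP connect and record
# whether it answered and how long the handshake took. Repeated
# failures or rising latency would trigger an alert in a real system.
import socket
import time

def probe(host: str, port: int = 22, timeout: float = 3.0):
    """Return (reachable, latency_seconds) for a TCP connect attempt."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True, time.monotonic() - start
    except OSError:
        return False, timeout

if __name__ == "__main__":
    for node in ["203.0.113.10", "203.0.113.11"]:  # TEST-NET placeholder IPs
        ok, latency = probe(node)
        status = f"{latency * 1000:.0f} ms" if ok else "DOWN"
        print(f"{node}: {status}")
```

An external monitor catches total outages; a probe like this running inside the network can additionally catch the slow-SSH-style lag reported earlier in the thread.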
If you have any feedback or questions - you can always reach out to me directly at [email protected] as I'm eager to help and ensure that our customers are happy and receiving the service level they deserve.
Hold up... Didn't you say that you rent, rather than own?
If so, are you waiting on your payment to go through customs, or what? Because if you rent servers, there shouldn't be anything delayed by shipping, right?
Hi @NobodyInteresting - appreciate you keeping an eye!
I believe I covered it, in the quote "As a company, we do have owned infrastructure and still continue to grow our assets - but it is no secret that a majority of our infrastructure for our KVM VPS product line is leveraged, rather than owned."
As with anything shipped from the States to Amsterdam, delays due to customs are sadly to be expected.
So by leveraged - you mean rented, correct?
Are you renting those servers with Colo Crossing in NL, or are they owned by you, and using the CC IP space and rack space?
@NobodyInteresting I believe the term @dustinc used was "leveraged," i.e. they used leverage to get the financing needed to purchase the machines.
So, with little money down, they can purchase large amounts of servers and pay financing every month (or something like that).
It's dangerous to take on too much leverage without the operating profits to sustain it. Shadow.tech, for example, took on something like $50M to buy NVIDIA Quadros, and now that they don't have enough money to service the financing debt, 2CRSi wants the debt repaid.
We have several deals worked out - though a majority is rented. This was covered previously within the thread.
So if you are renting from CC in NL, then what specifically are you waiting on to arrive, and what is held up at customs?
Hi @NobodyInteresting -- as you could imagine, we are awaiting hardware. Once it arrives, the gear will be inspected, stress tested, and put into production. We're focused on expansion, so be sure to keep an eye on our social media; we announce expansions as we go. By the way, we have yet to announce it publicly, but we've just expanded our Ryzen product line in Dallas. Perhaps Amsterdam is in the making?
Lots of good things happening.
Thanks, but that's not what I asked, to be honest...
You said you rent your hardware from CC. Which is cool.
But then you said you are waiting for your hardware to arrive at the DC, and that it is being held at customs.
So my question is: is this your own hardware being colocated at their DC? Or are you waiting on them to replenish their hardware in the DC so you can rent more?
Because you are making it sound like your own hardware is being colocated there, but then you mentioned that it's rented, and it is starting to sound a little confusing.
Hi @NobodyInteresting -- I'm sorry that I've confused you.
As I mentioned before, we have a mix; whether this particular batch is rented or owned really doesn't matter, I believe. I've mentioned we have a mix, and most of it is rented.
We are waiting for the hardware to arrive, and as I stated before, unfortunately we've experienced delays with each deployment we've done in Amsterdam, so delays are to be expected when expanding there.
I'm happy that we've had enough interest to sell out, but we're also learning as we go in terms of keeping enough production inventory readily available.
Well, actually it does matter. Quite a bit, to be honest.
If they are rented, then you are waiting on ColoCrossing to replenish their supplies. So it's more of a "we are waiting on more dedicated servers to be stocked by CC, so we can rent them out".
If you own the hardware, then you are only using CC for their IP space and colocation. That is when you can run into the issue of your hardware being stalled at customs, which is completely fine and happens, especially in these pandemic days.
The two, however, are very different, especially when we consider the pricing of your services. You can somewhat afford a metric shit ton of customers paying the prices that they do on colo'ed hardware.
But when you rent the hardware, that is a huge red flag. Because:
a) You need to cram in a LOT of customers to break even;
b) You sell at a loss, because you're cool like that and want to support the community?;
c) You sell a lot of 1/2/3-year offers, and then a year or two later you close shop, leaving everyone who has already paid for their services sucking their thumbs. Especially considering your background with AlphaRacks and the whole fiasco last year, this would be very worrying indeed.
You are making it sound like you own the hardware that is on its way there, but now you are saying it's rented from CC. So... which one is it?