

What setup do providers like Digital Ocean and Vultr use?

Hi,

I was wondering what platform providers like Digital Ocean and Vultr use to be so user-friendly and easy to use. I am sure Digital Ocean uses their own platform, but is there any off-the-shelf software/control panel that achieves something similar?

And what about the hardware setup needed to achieve something like that? For example, a storage unit connected over fibre to the compute node for processing, etc.?

Thanks,

Comments

  • We use a platform that was built entirely in-house. Of course, it relies on many open source projects. For example, our virtualization is QEMU and not specifically developed in-house. The management systems for it, the cloud panel, these are all in-house projects.

    For hardware, like most we use full systems. Your server is provisioned on a full computer with all parts internal to it. It is connected to storage systems for additional things like block storage (Volumes), but your base storage is a RAID10 array in the same server.
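To illustrate the RAID10 layout mentioned above, here is a generic sketch of how RAID10 places data (this is the textbook scheme, not DigitalOcean's actual implementation; drive counts and sizes are hypothetical): drives are grouped into mirrored pairs, writes are striped across those pairs, and mirroring halves the usable capacity.

```python
# Generic sketch of RAID10 block placement: drives are grouped into
# mirrored pairs, and logical blocks are striped across those pairs.
# Hypothetical illustration only, not any provider's real layout.

def raid10_placement(logical_block: int, num_drives: int) -> tuple[int, int]:
    """Return the two physical drives holding copies of this block."""
    assert num_drives % 2 == 0 and num_drives >= 4
    num_pairs = num_drives // 2
    pair = logical_block % num_pairs     # stripe across the mirror pairs
    return (2 * pair, 2 * pair + 1)      # both drives in that pair

def usable_capacity_tb(num_drives: int, drive_tb: float) -> float:
    """RAID10 usable capacity: half the raw space, due to mirroring."""
    return num_drives * drive_tb / 2

# With 6 drives there are 3 mirror pairs; logical block 4 lands on pair 1.
print(raid10_placement(4, 6))                      # (2, 3)
print(round(usable_capacity_tb(6, 1.92), 2))       # 5.76
```

The practical upshot for a VM on such an array: any single drive in a pair can fail without data loss, but a simultaneous failure of both drives in one pair loses the array, which is why the advice about backups below still applies.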

  • @DigitalOceanJD Who the hell are you? I'm tagging the real @jar :P

  • @DigitalOceanJD said:
    We use a platform that was built entirely in-house. Of course, it relies on many open source projects. For example, our virtualization is QEMU and not specifically developed in-house. The management systems for it, the cloud panel, these are all in-house projects.

    For hardware, like most we use full systems. Your server is provisioned on a full computer with all parts internal to it. It is connected to storage systems for additional things like block storage (Volumes), but your base storage is a RAID10 array in the same server.

    Great, I did not know someone from DO was on here!
    Could you please provide more information on these additional storage setups? I am interested to know.
    Say a node crashes or goes down. What happens? Do the VMs go down as well, or does it switch over to the additional storage?

  • Yakooza said: Could you please provide more information on these additional storage setups?

    Happy to answer anything that I can :)

    Yakooza said: Say a node crashes or goes down. What happens? Do the VMs go down as well, or does it switch over to the additional storage?

    If the hardware fails and the drives are dead, your data is gone with them. Always keep backups. If your data is important enough to you, keeping it on more than one system (load balanced setup, for example) is also important. The idea here is that some people simply do not want to pay the cost of such redundancy, because their data is not that important to them. So what we want to provide is that bare minimum, but then provide you with paths to grow into something more redundant as your needs grow to that point.

    Thanked by: Yakooza
  • edited March 2019

    DigitalOceanJD said: some people simply do not want to pay the cost of such redundancy, because their data is not that important to them

    To expand on this, I really like the phrase "treat your servers like cattle, not pets." A single instance should be disposable in a good strategy, so at no point would you care about the data on a single server. Alternatively, you may just not care about it because you have it backed up and it's not production data.

    There are typically two competing thoughts on failover:

    1. Above the virtualization layer.
    2. At the software layer.

    It is my opinion that #1 is a pipe dream in which reality simply does not match theory. You can't have hardware failover, with today's technology, that functions perfectly for all software use cases. In fact, it may even be damaging to certain use cases unless the software is built in such a way as to be prepared for it.

    Now #2, however, I believe is perfect. This is where you build failover based on your individual needs, and tailor it to your software. This is things like:

    • DNS failover
    • Load balancer
    • Database replication
    • File system sync

    These are where a failover strategy can really shine, and these are handled at the user level. They can be made easier with product, of course. With that kind of strategy, you want your end points to be inexpensive and barebones, because you take savings from that and spend it where it's most effective, in assisting with creating failover at the software layer.
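As a concrete illustration of that software-layer approach, here is a minimal sketch of the health-check logic behind DNS failover or a load balancer's backend selection. Everything here is hypothetical (the addresses, the threshold, the probe mechanism); a real setup would probe over the network and then update a DNS record or the balancer's backend pool.

```python
# Minimal sketch of software-layer failover: promote a backup endpoint
# after the primary fails N consecutive health probes, and fail back
# once the primary reports healthy again. Hypothetical illustration.

class FailoverMonitor:
    def __init__(self, primary: str, backup: str, threshold: int = 3):
        self.primary = primary
        self.backup = backup
        self.threshold = threshold   # consecutive failures before failover
        self.failures = 0
        self.active = primary

    def record_probe(self, healthy: bool) -> str:
        """Feed in one health-check result; return the active endpoint."""
        if healthy:
            self.failures = 0
            self.active = self.primary   # fail back once primary recovers
        else:
            self.failures += 1
            if self.failures >= self.threshold:
                self.active = self.backup
        return self.active

mon = FailoverMonitor("10.0.0.1", "10.0.0.2")
for ok in (True, False, False, False):   # primary dies after first probe
    active = mon.record_probe(ok)
print(active)  # 10.0.0.2
```

Note the threshold: failing over on a single missed probe would make the system flap on transient network blips, which is exactly the kind of software-specific tuning that makes the software-layer approach more practical than generic failover below the virtualization layer.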

    Thanked by: Yakooza
  • desperand Member
    edited March 2019

    I prefer Vultr because there are no hidden costs like DO has.
    I got caught out by DO's hidden costs, but I paid it all anyway, because I hate owing anyone money. Even when the charge feels unjustified, it is easier for me to pay and walk away for good than to try to win a dispute, when the whole situation looks like deliberate company policy for catching people out. The amount was not too big to pay, but after that I will never come back (and to be clear, my friends had the same experience).

    I am writing this here partly as revenge: you guys had hidden costs, added in 2018 (for years before that there were none), that users should have known about. I checked a week ago and it looks like they have been removed from all accounts, I assume because of the criticism. I also can't find the page about these costs in your KB anymore, though it used to be there. Quite a tricky thing, but you removed it, and that is good. Still, trust can't be restored after that.

    What about the topic and OP:
    https://github.com/LunaNode/

    Check this out. In years of using their services, I have never used anything else that works so well, with no bugs at all. Really, even with Vultr / DO I had issues with small bugs, while these guys run their own custom solutions written in Go that just work. Everything is easy to understand and easy to manage; you always know what you are doing, how you are doing it, and how much you will need to pay at the end of the month, so you can correct your budget, etc.

    Check out their code too; I hope you enjoy it. It is really something great to share.

    Thanked by: corbpie
  • perennate Member, Host Rep
    edited March 2019

    re desperand: we use OpenStack on LunaNode. Our Lobster project was an open-source front-end for various systems, but it was mostly aimed at resellers, and unfortunately we are no longer working on it, since resellers just want a simple WHMCS module. We don't use Lobster ourselves either, although a lot of its code is similar to what we do use.

    But OpenStack is open source! https://github.com/openstack Though I can't say it has no bugs... heh.

    Thanked by: uptime, eol
  • DigitalOcean use the JarPlatform which is mostly driven by Everclear and Red Bull.

    Thanked by: uptime, eol, mrKat
  • edited March 2019

    We have no costs that are not published, never have. We don't even really have a KB (effectively this has been surpassed by product docs, no one uses the KB on the support portal) so no idea what you're referring to. I'm always happy to resolve misunderstandings. My offer stands to review any situation and help resolve it @desperand. Best wishes.

  • doghouch Member
    edited March 2019

    @DigitalOceanJD said:
    We use a platform that was built entirely in-house. Of course, it relies on many open source projects. For example, our virtualization is QEMU and not specifically developed in-house. The management systems for it, the cloud panel, these are all in-house projects.

    For hardware, like most we use full systems. Your server is provisioned on a full computer with all parts internal to it. It is connected to storage systems for additional things like block storage (Volumes), but your base storage is a RAID10 array in the same server.

    PSA: Don't lie to us! We ALL know that DO stores all their customer data on punchcards because their VC funding got cut and they couldn't afford to pay for power on the database servers. Punchcards use NO power but take a million years to be read so logins now take ~3 days as each request forces their 'in-house' system to read through every punch card to verify the username + password.

    /s

  • @doghouch said:

    so logins now take ~3 days as each request forces their 'in-house' system to read through every punch card to verify the username + password.

    Which they hide by telling you to check your email for the 2fa email. This gives them some time to run the punchcards through the reader.

  • eol Member

    <3
