Looking for some early feedback on our first MVP: cloud services comparison website

arielse Member
edited March 2017 in Reviews

Hi guys, we have just launched our website, and we are looking for some early feedback or suggestions:

https://www.cloudbeard.io/

There is a lot of work to do; some things we already know:

  1. Right now we have only one benchmark of each service, but the idea is to benchmark every service, in every region, every X days/hours automatically, to get a good average of all the "performance indicators" (a rough sketch of what we mean appears after this list).
  2. We need to add a search box :P
  3. We have only four providers, but the idea is to integrate more. Right now, for several reasons, we only work with providers that offer:
    • Hourly billing
    • Third-party API (OAuth would be great)
    • Docker support
  4. We will use the credits from the referral links to benchmark more services, more frequently.
  5. It is really complex to assign a score to a cloud service. We know it is not the same if you are using an instance for in-memory caching, video processing, load balancing, or heavy database operations, but for now we are trying to create a scoring method for comparing "general purpose" cloud computing instances. This will become more flexible soon.
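To give an idea of what we mean by averaging the performance indicators across runs, here is a rough sketch in Python; the field names and numbers are simplified placeholders, not our actual schema or data:

```python
from statistics import mean

def average_indicators(runs):
    """Average each performance indicator over many benchmark runs.

    `runs` is a list of dicts, one per automated run (per region, per interval),
    e.g. {"geekbench_single": 2430, "geekbench_multi": 4500}.
    """
    collected = {}
    for run in runs:
        for name, value in run.items():
            collected.setdefault(name, []).append(value)
    return {name: mean(values) for name, values in collected.items()}

# Illustrative numbers only: three runs of the same plan in one region.
runs = [
    {"geekbench_single": 2400, "geekbench_multi": 4410},
    {"geekbench_single": 2430, "geekbench_multi": 4500},
    {"geekbench_single": 2450, "geekbench_multi": 4480},
]
print(average_indicators(runs))
```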

I hope you like it.

Thanks in advance!

Ariel

PS: If you are interested you can follow us on Facebook: /cloudbeardio or Twitter: @cloudbeardio


Comments

  • teamacc Member
    edited March 2017

    How exactly do you calculate the scores for geekbench with one core? Seems like 2399 score earns you a 4.0, 2438 earns you 4.9 and 2451 earns you 7.0

    Very confusing, given that they're less than 3% apart.

    Also, clicking through the comparisons does NOT update the traffic breakdown after the first switch it would seem. It just does not reset to zero when switching to a provider with pre-paid bandwidth.

  • arielse Member
    edited March 2017

    @teamacc said:
    How exactly do you calculate the scores for geekbench with one core? Seems like 2399 score earns you a 4.0, 2438 earns you 4.9 and 2451 earns you 7.0

    Very confusing, given that they're less than 3% apart.

    Every score depends on the monthly price, but yes, maybe this is confusing. I'm thinking of removing the scores from the individual benchmark items to avoid confusion and only displaying the final instance score.

    Also, clicking through the comparisons does NOT update the traffic breakdown after the first switch it would seem.

    Yes you are right, I will fix it, thanks!

  • Well, paying 10x more does not make your single-core perf 10x faster. Current CPUs top out somewhere between a 2.5k and 3k score.

  • Some sort of Filters view would be cool too. Like ascending or descending scores.

  • arielse Member
    edited March 2017

    @teamacc said:
    Well, paying 10x more does not make your single-core perf 10x faster. Current CPUs top out somewhere between a 2.5k and 3k score.

    Our logic is: if you pay 10x more and receive the same Geekbench performance, the CPU score is lower (it affects the final score in some way).
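    Roughly, you can think of it as performance per dollar. A minimal sketch of that idea (the reference values here are made up for illustration and are not the constants the site actually uses):

    ```python
    def cpu_score(geekbench_single, monthly_price_usd,
                  ref_geekbench=2500.0, ref_price=5.0, max_score=10.0):
        """Illustrative price-normalized CPU score.

        An instance hitting `ref_geekbench` for `ref_price` per month scores the
        maximum; paying 10x more for the same Geekbench result divides the raw
        score by 10. The real weighting on the site is more involved.
        """
        perf_per_dollar = (geekbench_single / ref_geekbench) / (monthly_price_usd / ref_price)
        return min(max_score, round(max_score * perf_per_dollar, 1))

    print(cpu_score(2438, 5))    # cheap instance with a good single-core result
    print(cpu_score(2438, 50))   # same result at 10x the price -> roughly 1/10 the score
    ```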

  • @rivermigue said:
    Some sort of Filters view would be cool too. Like ascending or descending scores.

    Thanks for your feedback. Yes, I think we will add a "Compute Services" view with filters and sorting options.

  • Doesn't seem that informative. Need to know about cpu throttling on long computations, availability, dependence on luck of the draw, etc. I know a common strategy with AWS is or was to spin up an instance, test its speed, and possibly throw it back in the pool and take another one until you get a fast one, which probably means few noisy neighbors.

    Need to know iops, storage cost, etc.

    It's better for the instance listings to say the same stuff that the vendors say (i.e. # of [v]cores, CPU GHz, disk/SSD space, etc.) so people can see it at a glance, instead of making them click around.

    Make everything downloadable as JSON so users can bypass the web presentation. People will scrape it anyway so you might as well embrace it.

  • Nekki Veteran

    'Cloudbeard'

    WTAF

    Thanked by 1: deadbeef
  • nulldev Member
    edited March 2017

    This page: "https://www.cloudbeard.io/providers" bugs out when you resize it:

    Thanked by 2: arielse, pnklz
  • @willie said:
    Doesn't seem that informative. Need to know about cpu throttling on long computations, availability, dependence on luck of the draw, etc. I know a common strategy with AWS is or was to spin up an instance, test its speed, and possibly throw it back in the pool and take another one until you get a fast one, which probably means few noisy neighbors.

    Hi willie! Regarding noisy neighbors, we will analyze all benchmarks across regions and try to find averages; I think in general the numbers don't vary a lot.

    The main problem with AWS is that they have burstable CPU instances (Amazon T2), which are very tricky; I think we will add a warning for those instances.
    We will add more information about the number of benchmarks, the number of regions tested, etc.

    Need to know iops, storage cost, etc.

    We will add more information and options about Block Storage (for example, on Amazon you can select different types of block storage volumes with different performance).

    It's better for the instance listings to say the same stuff that the vendors say (i.e. # of [v]cores, CPU GHz, disk/SSD space, etc.) so people can see it at a glance, instead of making them click around.

    Yes, I think we need to improve the navigation through the site; here you can see that information:

    https://www.cloudbeard.io/providers/vultr

    Make everything downloadable as JSON so users can bypass the web presentation. People will scrape it anyway so you might as well embrace it.

    Yes in the future we can probably add a public API. Thanks for your comments!

  • nulldev said: This page: "https://www.cloudbeard.io/providers" bugs out when you resize the page:

    Hi nulldev, yes, we have to revise the CSS responsiveness of the whole website. Thanks for pointing that out.

  • @Nekki said:
    'Cloudbeard'

    WTAF

    The best accommodation for your infrastructure stack:

    CloudBeard Sketch

    We are working on a great product which uses all this data as an "inventory" :) I will keep you posted.

    Thanked by 1: yomero
  • elgs Member

    Website is done, now we need some data. :)

  • Interesting about the 96gb vultr instance. The biggest on my vultr console is 64gb, 16 vcores. I have one running right now using the $20 signup credit that will expire in a few days. I'm compiling gcc 6.3 with it (takes around 1/2 hour on an i7-3770) and can run some other test if you want. I've had it a little under an hour and will probably go past the 1 hour mark (i.e. 2 hours billed) before the gcc compile finishes. I really shouldn't care since the credit is about to expire, but there's another compute task that I might try it on, to use up the remaining credit.

  • @willie said:
    Interesting about the 96gb vultr instance. The biggest on my vultr console is 64gb, 16 vcores. I have one running right now using the $20 signup credit that will expire in a few days. I'm compiling gcc 6.3 with it (takes around 1/2 hour on an i7-3770) and can run some other test if you want. I've had it a little under an hour and will probably go past the 1 hour mark (i.e. 2 hours billed) before the gcc compile finishes. I really shouldn't care since the credit is about to expire, but there's another compute task that I might try it on, to use up the remaining credit.

    It depends on the server location; for example, the 96GB Vultr instance is not in New Jersey but in Atlanta:

    Vultr $640 instance

    Our process is fully automated (using the Vultr API), and it's really fast, so we can run every benchmark in a completely clean environment without spending much money. Thanks for your help anyway :)
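    The loop is basically create, wait, benchmark over SSH, destroy. A stripped-down sketch of that flow; the create/destroy helpers and the benchmark script are placeholders standing in for the real provider API calls and our own scripts:

    ```python
    import time
    import subprocess

    def benchmark_plan(create_instance, destroy_instance, plan_id, region):
        """Spin up a fresh instance, benchmark it, and tear it down.

        `create_instance` / `destroy_instance` are placeholders that would wrap
        the provider's API (e.g. Vultr's). With hourly billing, a run that
        finishes inside the first hour costs a single billable hour.
        """
        instance = create_instance(plan_id=plan_id, region=region)
        try:
            while not instance.ready():          # wait for SSH to come up
                time.sleep(10)
            # run the benchmark script on the clean instance and collect its output
            result = subprocess.run(
                ["ssh", f"root@{instance.ip}", "bash -s"],
                input=open("run_benchmarks.sh", "rb").read(),
                capture_output=True, check=True,
            )
            return result.stdout
        finally:
            destroy_instance(instance)           # never leave instances running
    ```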

  • Oh I see. You still get dinged for 1 hour minimum right? Still I guess that each $10 referral credit covers a bunch of tests. That was interesting about the 4 and 8 core instances being more cpu per dollar than the large instances. Any idea what the hardware is? It occurs to me that one way to avoid being my own noisy neighbor is to run stuff on multiple locations.

    The dashboard kept showing my 16 core instance at 0% cpu even though I had all cores loaded...

  • WSS Member

    @arielse said:

    @Nekki said:
    'Cloudbeard'

    WTAF

    The best accommodation for your infrastructure stack:

    CloudBeard Sketch

    We are working on a great product which uses all this data as an "inventory" :) I will keep you posted.

    Where's the random XMLRPCs raping WordPress?

    Thanked by 1: JahAGR
  • @willie said:
    Oh I see. You still get dinged for 1 hour minimum right? Still I guess that each $10 referral credit covers a bunch of tests. That was interesting about the 4 and 8 core instances being more cpu per dollar than the large instances. Any idea what the hardware is? It occurs to me that one way to avoid being my own noisy neighbor is to run stuff on multiple locations.

    The dashboard kept showing my 16 core instance at 0% cpu even though I had all cores loaded...

    Exactly. Everything is ready to start benchmarking automatically, but I'm afraid something could go wrong and I'll get a bill for $50K at the end of the month :P I will start with the smaller instances.

    I don't know what hardware Vultr is using; from the database I can see:

    Virtual CPU a7769a6388d5 (???)

    But the benchmarks are really similar to Linode's, so I suppose they are using something similar:

    Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz
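    For anyone who wants to check their own instance, this is just the model string the hypervisor exposes in /proc/cpuinfo, so it only reveals the host CPU when the provider passes it through (on Vultr it comes back masked, as above):

    ```python
    def guest_cpu_model(path="/proc/cpuinfo"):
        """Return the 'model name' string the guest kernel reports."""
        with open(path) as f:
            for line in f:
                if line.startswith("model name"):
                    return line.split(":", 1)[1].strip()
        return "unknown"

    print(guest_cpu_model())
    ```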

  • Interesting. That's an 18 core chip so if it's a dual socket system, that would be 36 cores, 72 threads/vcores, and I'll guess 256gb of memory. I wonder why they don't sell single instances that size, like AWS does (around $5/hour iirc). I'm actually liking Vultr quite a lot because of all that hardware. Now if I can just build up enough promo credit I can set up some kind of job queueing system and spew my stuff across dozens of Vultr servers instead of just my two cheap dedis...

  • @WSS said:

    @arielse said:

    @Nekki said:
    'Cloudbeard'

    WTAF

    The best accommodation for your infrastructure stack:

    CloudBeard Sketch

    We are working on a great product which uses all this data as an "inventory" :) I will keep you posted.

    Where's the random XMLRPCs raping WordPress?

    Not a big fan of WordPress (and PHP), but WordPress powers almost 27.6% of the entire internet:

    https://w3techs.com/technologies/details/cm-wordpress/all/all

    So that's why he is playing ping-pong with MySQL, he earned it.

  • WSS Member

    @arielse said:
    Not a big fan of WordPress (and PHP), but WordPress powers almost 27.6% of the entire internet:

    https://w3techs.com/technologies/details/cm-wordpress/all/all

    So that's why he is playing ping-pong with MySQL, he earned it.

    That doesn't answer my question.

  • @willie said:
    Interesting. That's an 18 core chip so if it's a dual socket system, that would be 36 cores, 72 threads/vcores, and I'll guess 256gb of memory. I wonder why they don't sell single instances that size, like AWS does (around $5/hour iirc). I'm actually liking Vultr quite a lot because of all that hardware. Now if I can just build up enough promo credit I can set up some kind of job queueing system and spew my stuff across dozens of Vultr servers instead of just my two cheap dedis...

    The reason why the biggest Vultr instance scores so low:

    https://www.cloudbeard.io/providers/vultr/compute/98304

    It's because, for $640.00 per month, you can get something better, for example at OVH:

    MG-512 - $635

    https://www.ovh.com/us/es/servidores-dedicados/enterprise/170mg4.xml

    • 2x Xeon E5-2680v4 (28 cores, 56 threads)
    • 512GB DDR4 ECC 2133 MHz
    • SoftRaid 2x450GB NVMe

    That's why, in general, the score goes down as you scale up with cloud computing providers:
    CloudBeard Vultr instances

    A 36-core / 72-thread / 256 GB Vultr instance would cost a fortune.

    Thanked by 1: WSS
  • I think you'd have to compare to the OVH public cloud rather than a dedi: https://www.ovh.com/us/es/public-cloud/instances/precios/

    They don't seem to have a high-cpu 256gb instance either. Closest is probably the EG-120 (120gb ram, 32 vcores, $448/mo or $1.4xx/hour, or was that euros). Remember OVH's monthly price works out to 50% of what it would be at the hourly rate over a full month.

    Of course renting any of these things for a full month would be a money pit ;-). If I needed a monthly machine that size there are some far more affordable ones at Hetzner and Online. The idea is to do a batch job that currently takes about 2 weeks on a Hetzner i7-3770. I recently got an Online E3-1230v3 which is about the same speed, so using both would get it to 1 week. But spinning up ten midsized Vultrs would get it to less than a weekend, I think.

  • @willie said:
    I think you'd have to compare to the OVH public cloud rather than a dedi: https://www.ovh.com/us/es/public-cloud/instances/precios/

    They don't seem to have a high-cpu 256gb instance either. Closest is probably the EG-120 (120gb ram, 32 vcores, $448/mo or $1.4xx/hour, or was that euros). Remember OVH's monthly price works out to 50% of what it would be at the hourly rate over a full month.

    Yes, we are adding OVH soon; it is the next one on our list.

    Of course renting any of these things for a full month would be a money pit ;-). If I needed a monthly machine that size there are some far more affordable ones at Hetzner and Online. The idea is to do a batch job that currently takes about 2 weeks on a Hetzner i7-3770. I recently got an Online E3-1230v3 which is about the same speed, so using both would get it to 1 week. But spinning up ten midsized Vultrs would get it to less than a weekend, I think.

    Yes, I think that if you figure out how to queue jobs across ten midsized Vultrs, that will be your best bet. I have done something similar in the past using Python, Redis, and Celery.

    You will need something like Docker or Chef/Puppet/SaltStack/Ansible to get that running quickly.
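    A minimal version of that setup could look like this; the broker URL, module name, and task body are placeholders rather than code from a real project:

    ```python
    # tasks.py -- Celery workers on the instances pull jobs from one Redis broker
    from celery import Celery

    app = Celery("batch", broker="redis://queue-host:6379/0")  # placeholder broker URL

    @app.task
    def run_chunk(chunk_id):
        # replace with the real work for one chunk of the batch job
        print(f"processing chunk {chunk_id}")
    ```

    Each worker instance runs `celery -A tasks worker`, and the master just calls `run_chunk.delay(i)` for every chunk.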

  • Yes, I use Ansible and would probably just make a Vultr disk image (whatever they're called) to initialize new instances. That would launch something that pulls and runs tasks from a centralized Redis queue someplace. The other way I've done it is to simply spawn remote tasks by ssh from a centralized master. That allows more coordination at the control end.
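    For the ssh variant, it doesn't need to be anything fancier than this kind of thing (hostnames and the task command are placeholders):

    ```python
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    hosts = ["worker-1", "worker-2"]                    # placeholder instance hostnames
    tasks = [f"~/run_task.sh {i}" for i in range(20)]   # placeholder task commands

    def run_remote(job):
        host, cmd = job
        # push one task over ssh and collect its output on the master
        return subprocess.run(["ssh", host, cmd], capture_output=True, text=True)

    # assign tasks to workers round-robin, then run them from the master
    jobs = [(hosts[i % len(hosts)], cmd) for i, cmd in enumerate(tasks)]
    with ThreadPoolExecutor(max_workers=len(hosts)) as pool:
        results = list(pool.map(run_remote, jobs))
    ```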

    There are currently a bunch of OVH Geekbench timings on browser.geekbench.com if that matters.

    I'd be interested in seeing tests of the new Scaleway Xeon instances (15gb and 30gb) if you want to do those. I played with the 60gb a little and I felt it had about the same cpu as my i7, but was much more expensive because of the larger ram. So it would be good to see how the smaller ones hold up.

    Thanked by 1: arielse
  • @willie said:
    There are currently a bunch of OVH Geekbench timings on browser.geekbench.com if that matters.

    Where did you find them? I'm searching for "OVH" without results:

    http://browser.geekbench.com/v4/cpu/search?utf8=✓&q=ovh

    I'd be interested in seeing tests of the new Scaleway Xeon instances (15gb and 30gb) if you want to do those. I played with the 60gb a little and I felt it had about the same cpu as my i7, but was much more expensive because of the larger ram. So it would be good to see how the smaller ones hold up.

    Yes, Scaleway is a really interesting provider; I think it's the only one offering BareMetal servers with hourly pricing:

    Scaleway BareMetal

    Also, as they have three different "service levels" (starter cloud servers, intensive cloud servers, and BareMetal), it is not clear how they perform.

  • Falzo Member

    arielse said: Where did you find them? I'm searching for "OVH" without results:

    http://browser.geekbench.com/v4/cpu/search?utf8=✓&q=ovh

    Of course not; you need to search for either the processor or part of the system name, which should contain 'openstack' for OVH (and maybe others):

    https://browser.geekbench.com/v4/cpu/search?dir=desc&q=openstack&sort=multicore_score

    Not all of those are OVH, of course... but from there you might want to match the number of cores and clock rate to the products of their cloud range. In the details you can see how much RAM they are equipped with, and you might also have a look at the BIOS string, which sometimes carries -ovh at the end.

    I also benched through most of their public cloud offers; you'll find the links in this thread: https://www.lowendtalk.com/discussion/comment/2061744/#Comment_2061744

    The Scaleway ones are harder to identify, as they mostly just show 'QEMU standard PC' as the system name, so you need to know what CPU they might use and search for that, like D-1531 for their intensive load range: https://browser.geekbench.com/v4/cpu/search?q=D-1531
    (easy to identify in the details via the non-standard RAM sizes of 15/30/60/120GB)

    or C2750 for the BareMetal thingies https://browser.geekbench.com/v4/cpu/search?q=C2750 - but it's much harder to be sure, because there are a lot of other systems out there running those CPUs...

    Thanked by 1: arielse
  • yomero Member
    edited March 2017

    The search function at Geekbench is a mess. It's better to use Google, lol

  • AnthonySmith Member, Patron Provider

    Interesting, nice design. However:

    It is your provider not listed here? Let us know
    In order to add a new provider, it must meet the following requirements:
    Hourly billing
    Public API
    Docker support
    

    So you built a site for about 10 hosts in total? Interesting.

  • @Falzo said:

    arielse said: Where did you find them? I'm searching for "OVH" without results:

    http://browser.geekbench.com/v4/cpu/search?utf8=✓&q=ovh

    Of course not; you need to search for either the processor or part of the system name, which should contain 'openstack' for OVH (and maybe others):

    https://browser.geekbench.com/v4/cpu/search?dir=desc&q=openstack&sort=multicore_score

    Not all of those are OVH, of course... but from there you might want to match the number of cores and clock rate to the products of their cloud range. In the details you can see how much RAM they are equipped with, and you might also have a look at the BIOS string, which sometimes carries -ovh at the end.

    I also benched through most of their public cloud offers; you'll find the links in this thread: https://www.lowendtalk.com/discussion/comment/2061744/#Comment_2061744

    The Scaleway ones are harder to identify, as they mostly just show 'QEMU standard PC' as the system name, so you need to know what CPU they might use and search for that, like D-1531 for their intensive load range: https://browser.geekbench.com/v4/cpu/search?q=D-1531
    (easy to identify in the details via the non-standard RAM sizes of 15/30/60/120GB)

    or C2750 for the BareMetal thingies https://browser.geekbench.com/v4/cpu/search?q=C2750 - but it's much harder to be sure, because there are a lot of other systems out there running those CPUs...

    Hi Falzo, thanks for the details! I think it's better for us to integrate through their APIs and benchmark everything ourselves. We have seen a lot of benchmarks searching this forum and Google, but we are concerned about how those benchmarks were done, whether they were run in a clean environment without background processes, etc. That's one of the reasons we are taking a different approach from the now-dead ServerBear, or ServerScope, and prefer running benchmarks automatically instead of receiving anonymous contributions.
