Help us test our "LowEndBox" servers

edited April 2012 in Providers

Hello,

I am fairly new to the LET community, so I should introduce myself. Hi, I'm Eric Andrews, the owner of CubixCloud Web Services. We're currently working to update our website, services, and team. I thought I would find more interest from the LowEndTalk community than from posting on WebHostingTalk. (I hope this is allowed on LET, hehe)

We need about 5 users to run some tests on our new Chicago OpenVZ servers. (This information is not currently available on our website.) I would like to know your thoughts on overall speed, network, control, etc. Posting your results in this forum would benefit anyone interested, as well as help us get a good idea of what's going on from a customer point of view. If you have any suggestions, compliments, complaints, etc., we would love to hear them! The first 5 to e-mail me with interest will get access to a 1GB "LowEndServer".

Please let me know if you have any questions about this beta test or about the servers, we would be happy to answer them here :)

E-mail: [email protected]

Thanked by: Amfy

Comments

  • Thanks for the emails and all of the interest - a few more servers are still available

  • I have sent you an email. Yet to receive a reply.

  • edited April 2012

    Sorry guys for the late replies, I had something to do. Getting back to replies now :)

    We have received more than enough interest in this. Thanks for your help - the LET community is great :)

  • debug Member

    Just got setup, I'll post a quick benchmark:

    -19:08:58- [[email protected] ~]# wget freevps.us/downloads/bench.sh -O - -o /dev/null|bash
    CPU model :  Intel(R) Core(TM) i7 CPU         950  @ 3.07GHz
    Number of cores : 4
    CPU frequency :  3066.818 MHz
    Total amount of ram : 1024 MB
    Total amount of swap : 0 MB
    System uptime :   4 min,
    Download speed from CacheFly: 78.0MB/s
    Download speed from Linode, Atlanta GA: 15.7MB/s
    Download speed from Linode, Dallas, TX: 15.3MB/s
    Download speed from Linode, Tokyo, JP: 7.25MB/s
    Download speed from Linode, London, UK: 4.84MB/s
    Download speed from Leaseweb, Haarlem, NL: 8.52MB/s
    Download speed from Softlayer, Singapore: 8.47MB/s
    Download speed from Softlayer, Seattle, WA: 21.9MB/s
    Download speed from Softlayer, San Jose, CA: 34.8MB/s
    Download speed from Softlayer, Washington, DC: 42.0MB/s
    I/O speed :  89.9 MB/s
    -19:10:40- [[email protected] ~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 11.7794 s, 91.2 MB/s
    -19:11:44- [[email protected] ~]#
    
    Thanked by: EricCubixCloud
  • fly Member

    I/O looks a bit low for a new server, but I guess it's because everyone is running tests at the same time.

    Thanked by: EricCubixCloud
  • might be that it is running raid 1.

    I am no longer active here, find me at https://talk.lowendspirit.com

  • might be that it is running raid 1.

    These servers are running RAID1 with 1TB drives. Could you suggest another solution?

  • debug Member

    ioping as well:

    -19:22:15- [[email protected] ~]# ioping -c 10 /
    4096 bytes from / (simfs /vz/private/114): request=1 time=0.1 ms
    4096 bytes from / (simfs /vz/private/114): request=2 time=0.1 ms
    4096 bytes from / (simfs /vz/private/114): request=3 time=0.1 ms
    4096 bytes from / (simfs /vz/private/114): request=4 time=0.2 ms
    4096 bytes from / (simfs /vz/private/114): request=5 time=0.1 ms
    4096 bytes from / (simfs /vz/private/114): request=6 time=0.1 ms
    4096 bytes from / (simfs /vz/private/114): request=7 time=0.1 ms
    4096 bytes from / (simfs /vz/private/114): request=8 time=0.1 ms
    4096 bytes from / (simfs /vz/private/114): request=9 time=0.1 ms
    4096 bytes from / (simfs /vz/private/114): request=10 time=0.1 ms
    
    --- / (simfs /vz/private/114) ioping statistics ---
    10 requests completed in 9002.1 ms, 8130 iops, 31.8 mb/s
    min/avg/max/mdev = 0.1/0.1/0.2/0.0 ms
    

    I'll run the dd again in a few hours, as well.

    Thanked by: EricCubixCloud
  • We're still answering e-mail questions and sending out the login details to everyone who has e-mailed us. Sorry for the delay. Thanks to all so far for posting benchmark and other results :D

  • AnthonySmith Member
    edited April 2012

    @EricCubixCloud it is rather worrying for a host to be asking if there is a better option than RAID 1.

    Raid 10?

    Or I suppose RAID 1 is fine if you only ever intend to put <20 VPS per node, but given that it is OpenVZ and you describe 1GB as an LEB, that is highly doubtful.

    If you really want to test your node put 50 users on it and see how well it does :)
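A crude way to approximate that kind of contention on a single test container (purely illustrative; the file names, writer count, and sizes here are arbitrary) is to run several synced dd writers in parallel and compare the per-writer speed against a solo run:

```shell
# Illustrative stress sketch: 8 parallel writers, each flushing 8 MB to disk.
# With many neighbours doing the same, per-writer throughput falls sharply.
for i in 1 2 3 4 5 6 7 8; do
  dd if=/dev/zero of="stress_$i" bs=64k count=128 conv=fdatasync 2>"stress_$i.log" &
done
wait
tail -n 1 stress_*.log   # show each writer's "bytes copied" summary line
rm -f stress_*
```

This is nowhere near a real 50-container workload, but it gives a quick feel for how the array behaves under concurrent synced writes.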


  • Ioping looks good.

    Thanked by: EricCubixCloud
  • @AnthonySmith said:

    Or I suppose RAID 1 is fine if you only ever intend to put <20 VPS per node, but given that it is OpenVZ and you describe 1GB as an LEB, that is highly doubtful.

    If you really want to test your node put 50 users on it and see how well it does :)

    I understand what you're saying about RAID - I shouldn't have worded it that way, what I was really asking was your opinion. Thanks for your input. :)

    Our 24GB i7 nodes will have a maximum of 30 users.

  • To all who are interested, I should have mentioned that our 1GB servers will be around $12. We can let them go as a LowEnd sale for $7.

    To everyone I gave a 256MB, 512MB, or 768MB test server to: here is our regular pricing.

    256MB: $4
    512MB: $7
    768MB: $9
    1GB: $12

    Please let me know if you have any more questions. All thoughts and suggestions are taken into consideration. Please keep in mind we're still in a testing stage. :)

  • Why do all these beta tests have such a low user limit, yet more people always seem to get them?

    I know, I'm Dale Maily.

  • @Taylor said: Why do all these beta tests have such a low user limit, yet more people always seem to get them?

    Good question! Sorry, I figured 5 was enough, but once I saw the interest I realized many more opinions and suggestions would be beneficial. We did have to cut it off at a certain point, though.

    Have you e-mailed me?

  • ah ok...

    But not 30 on raid 1 though I hope :)


  • Jacob Member
    edited April 2012

    Honestly, RAID 1 is not that bad as long as you have decent hard drives and you keep the data read/write hoggers to a minimum; otherwise you will have issues.

    I also disagree. We are running 40 containers on this node and it is RAID 1.

    dd if=/dev/zero of=test bs=16k count=64k conv=fdatasync && rm -rf test
    65536+0 records in
    65536+0 records out
    1073741824 bytes (1.1 GB) copied, 10.9559 seconds, 98.0 MB/s

    top - 19:04:06 up 6 days, 2:37, 1 user, load average: 1.79, 1.61, 1.36
    Tasks: 1323 total, 2 running, 1320 sleeping, 0 stopped, 1 zombie
    Cpu(s): 8.2%us, 8.9%sy, 0.0%ni, 81.6%id, 0.7%wa, 0.0%hi, 0.5%si, 0.0%st
    Mem: 16391800k total, 15138052k used, 1253748k free, 1064156k buffers
    Swap: 18481144k total, 1608k used, 18479536k free, 9933072k cached

    @AnthonySmith said: ah ok...

    But not 30 on raid 1 though I hope :)

    Thanked by: EricCubixCloud


  • @EricCubixCloud said: Have you e-mailed me?

    Of course not, I took your post at face value.


  • @Jacob said: Honestly, RAID 1 is not that bad as long as you have decent hard drives and you keep the data read/write hoggers to a minimum; otherwise you will have issues.

    I also disagree. We are running 40 containers on this node and it is RAID 1.

    I agree, thanks so much for your input.

  • Additionally, you could upgrade to at least RAID5/6 with a couple more drives and double or triple the amount of containers you can comfortably fit on a modern node.

    Thanked by: EricCubixCloud
  • @Damian said: Additionally, you could upgrade to at least RAID5/6 with a couple more drives and double or triple the amount of containers you can comfortably fit on a modern node.

    Thanks for your suggestion. We're going to see how it goes for now - around 30 users. The speeds have proven to be quite good considering it is software RAID1 without the large cache of most expensive RAID card setups (with such a cache, the file dd writes sticks in the cache and shows an I/O speed of nearly 300MB/s or whatever it may be. However, it does not really affect actual performance from what I've seen.)
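The cache effect described here is easy to demonstrate yourself; a minimal sketch (the file name ddtest is arbitrary) compares a write that can sit in the page cache against one flushed to disk with conv=fdatasync, the flag the benchmarks in this thread use:

```shell
# Illustrative: the first dd reports roughly page-cache speed, while the
# second forces data to disk before reporting, so it reflects the array.
dd if=/dev/zero of=ddtest bs=64k count=1k 2>&1 | tail -n 1
dd if=/dev/zero of=ddtest bs=64k count=1k conv=fdatasync 2>&1 | tail -n 1
rm -f ddtest
```

The first number is usually much higher, which is exactly why bench.sh and the dd runs posted above all pass conv=fdatasync.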

  • @EricCubixCloud said: However, it does not really affect actual performance from what I've seen.

    People around here like their large numbers, even though they'll never actually have any realistic utilization of them :)

    Thanked by: NanoG6
  • @DotVPS said: Are you just starting VPS's?

    I have seen your domain's been registered a while?

    We began with web hosting and some other services. Now we're finally starting up a new node for VPS - focusing more on quality, and rather than working with other companies, we're now building our own custom servers and such :)

    I have many other domains too - what does a registration date mean nowadays? :P

  • VPNsh Member, Provider
    edited April 2012

    Speeds seem pretty solid from what I've seen so far... I downloaded the 200MB CacheFly file just to be different :P.

    [[email protected] ~]# wget cachefly.cachefly.net/200mb.test
    --21:45:43-- http://cachefly.cachefly.net/200mb.test
    Resolving cachefly.cachefly.net... 205.234.175.175
    Connecting to cachefly.cachefly.net|205.234.175.175|:80... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 209715200 (200M) [application/octet-stream]
    Saving to: `200mb.test'
    100%[=======================================>] 209,715,200 71.8M/s in 2.8s
    21:45:46 (71.8 MB/s) - `200mb.test' saved [209715200/209715200]

    From what I've been seeing, i/o has been pretty consistent with these results too:

    [[email protected] ~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 12.0929 seconds, 88.8 MB/s

    About to get off to bed, but I'll be running lots more tests tomorrow :), hope everything goes well Eric!

    Thanked by: EricCubixCloud


  • @liamwithers said: From what I've been seeing, i/o has been pretty consistent with these results too:

    Hey Liam! Glad to hear this - those speeds look great :)

    @liamwithers said: I'll be running lots more tests tomorrow :)

    This is great, please run as many tests as you would like :) Enjoy!

  • Here are my initial results. Not bad.

    http://pastie.org/3731947

  • Amfy Member

    My results are at https://piratenpad.org/ro/r.bjxyLxFqxjMwOHJ9 (maybe I will update some in a few hours)

  • @DanielM said: Here are my initial results. Not bad.

    Thanks!

    @Amfy said: (maybe I will update some in a few hours)

    Great results so far. Thanks for putting in the comments there. I see some users making use of almost the full 1Gbps :)

  • dnom Member

    @EricCubixCloud said: I see some users making use of almost the full 1Gbps

    In, out or both? I'm wondering if anyone is testing the Rx side.
    Test file? @EricCubixCloud

  • @EricCubixCloud: How about some more test slots, this time for a small fee - say one "testing" month at 50% or 75% off?


    I don't understand why more LEB providers don't have RAID 5/6/60 and instead prefer software RAID1 or RAID10; the former gives you redundancy AND blazing read+write speeds. I assume it's because hardware controllers coupled with the decent number of drives needed are expensive, and it's cheaper/faster to roll a 2- or 4-disk software RAID setup?
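For what it's worth, the raw capacity side of that trade-off is easy to put numbers on (illustrative shell arithmetic, assuming four 1TB drives; it ignores rebuild times and the RAID 5/6 write penalty, which are the usual counterarguments):

```shell
# Usable capacity of a 4-drive array of 1TB disks, per RAID level (illustrative).
drives=4; size=1
echo "raid1 (2-disk mirror): ${size} TB"     # one drive's worth per mirror pair
echo "raid10: $(( drives / 2 * size )) TB"   # striped mirrors: half the drives
echo "raid5:  $(( (drives - 1) * size )) TB" # one drive's worth of parity
echo "raid6:  $(( (drives - 2) * size )) TB" # two drives' worth of parity
```

So with the same four drives, RAID 5 buys one extra drive of capacity over RAID 10; whether that is worth the parity write penalty on a busy VPS node is exactly the debate in this thread.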
