
How is this even possible?

Comments

  • DewlanceVPS Member, Patron Provider

    No, it's impossible :)))

  • @DewlanceVPS said: No, it's impossible :)))

    yeah, whatever

  • bit Member

    I got one in each location. Still need to set them up. However, I may drop them, as they don't use a secure connection to the client area or their VPS control panel.

  • @vdnet said: Considering 128GB of RAM for one box, they are going to be overselling the disk I/O and CPU like crazy before they sell out that box.

    We were putting 8x SATA in RAID 10 behind 32GB of RAM. At that ratio they would need 32x SATA disks to match the disk I/O needed to power well-performing virtual servers (rough math below).

    Unless they have SSD cache? :)
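
    For a rough sense of the scaling being questioned here, a back-of-the-envelope calculation is below; it assumes disk I/O demand grows roughly linearly with the RAM sold on the node, and it only uses the figures quoted above, not measurements of the provider's actual hardware:

    ```python
    # Back-of-the-envelope disk count, assuming I/O demand scales linearly
    # with the RAM sold on a node (an assumption, not a measured fact).
    reference_ram_gb = 32    # vdnet's reference node
    reference_disks = 8      # 8 SATA disks in RAID 10
    big_node_ram_gb = 128    # the 128GB box being discussed

    disks_needed = reference_disks * (big_node_ram_gb / reference_ram_gb)
    print(f"A {big_node_ram_gb}GB node needs ~{disks_needed:.0f} SATA disks "
          f"to keep the same disk-to-RAM ratio")  # -> ~32 disks
    ```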

  • @Corey if you use SSD-cached disks in the node and then have SANs for the actual data, you can make it work better.

  • @24khost said: @Corey if you use SSD-cached disks in the node and then have SANs for the actual data, you can make it work better.

    SAN???? wtf.... no, then you have to have costly fiber links between the server and the SAN, and a massively expensive SAN... might as well use local disks on all the servers with SSD cache.

  • It's expensive, but it would give you your large IOPS!

  • @24khost said: It's expensive, but it would give you your large IOPS!

    Might as well keep the local disks to have dedicated IOPS per server.... rather than shared IOPS on a SAN.
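
    To illustrate the dedicated-versus-shared point, here is a toy comparison; every figure in it (node count, VMs per node, spindle IOPS, the SAN's total IOPS) is an illustrative assumption, not a measurement of anyone's hardware:

    ```python
    # Toy per-VM IOPS comparison: local RAID in every node vs one shared SAN.
    # All numbers are illustrative assumptions.
    nodes = 10
    vms_per_node = 40

    local_iops_per_node = 8 * 150   # e.g. 8 SATA spindles at ~150 IOPS each
    san_total_iops = 12_000         # hypothetical SAN figure, shared by all nodes

    local_fair_share = local_iops_per_node / vms_per_node
    san_fair_share = san_total_iops / (nodes * vms_per_node)

    print(f"Local disks: ~{local_fair_share:.0f} IOPS per VM, contention stays within one node")
    print(f"Shared SAN:  ~{san_fair_share:.0f} IOPS per VM, contention shared across all {nodes} nodes")
    ```

    Even when the fair-share averages come out similar, a busy neighbour on a SAN affects every node, while local disks confine the contention to one box.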

  • I know, I was being funny, Corey!

  • SSD cache is really not required if the setup is proper and you use good disks and, most importantly, an excellent controller. I am not against a pure-SSD or a cached setup. We have tried the cached setup too; however, it did not turn out to be significantly fruitful.

    Our new setups are able to deliver disk I/O of ~600 MB/s on pure SATA II disks. This would drop by at most 50% when the server is full, so even then you should expect about 250 - 350 MB/s; I wouldn't say that's anywhere near poor disk I/O. We are not competing against pure-SSD setups, and since our business model requires more disk space on VMs, these SATA setups work out perfectly for us.

    Around December, we plan to start our new line of pure-SSD offerings too. This would use 16+ SSD drives with an LSI MegaRAID 9260-16i. We are trying to keep the pricing very close to the SATA VM pricing, with more than double the disk speed. We will give out a few test VMs to loyal customers and selected testers. Let's see how it goes.
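
    Sequential-throughput claims like the ~600 MB/s above are usually sanity-checked with a simple streaming-write test; a minimal Python sketch of such a test is below (the file name and sizes are arbitrary choices, and page-cache and filesystem effects mean the result is only a ballpark figure):

    ```python
    # Rough sequential-write throughput check. The fsync at the end forces the
    # data to disk so the page cache does not inflate the number too much.
    import os
    import time

    TEST_FILE = "throughput_test.bin"  # arbitrary scratch file
    BLOCK_SIZE = 1024 * 1024           # write in 1 MiB chunks
    BLOCK_COUNT = 1024                 # 1 GiB total

    block = b"\0" * BLOCK_SIZE
    start = time.monotonic()
    with open(TEST_FILE, "wb") as f:
        for _ in range(BLOCK_COUNT):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.monotonic() - start

    mb_per_s = BLOCK_COUNT * BLOCK_SIZE / (1024 * 1024) / elapsed
    print(f"Sequential write: ~{mb_per_s:.0f} MB/s over {elapsed:.1f}s")
    os.remove(TEST_FILE)
    ```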

  • @web_host your problem probably lies with the LSI MegaRAID controllers..... we used to use them exclusively, but Adaptec seems to have it down better. You'll have to spend a few extra bucks, but it is well worth it.

  • web_host Member
    edited October 2012

    I would choose to disagree. We have tested Adaptec and it didn't work out anywhere close to the LSI. There are numerous providers who may also disagree with your statement, but I personally believe this is more a matter of personal preference. Most importantly, the RAID controller alone is not the sole contributing factor to a proper setup.

    Besides, what problem are we talking about?? For a provider who uses pure SATA drives, I really don't think the speeds are anywhere near unacceptable.

  • @web_host said: Besides, what problem are we talking about?? For a provider who uses pure SATA drives, I really don't think the speeds are anywhere near unacceptable.

    The problem of SSD cache not producing enough of an improvement for you.

  • web_host Member
    edited October 2012

    @Corey said: The problem of SSD cache not producing enough of an improvement for you.

    This is what I said earlier:

    "We have tried the cached setup too however, it did not turn out to be significantly fruitful."

    In case you did not comprehend it correctly, it means that there were very few noticeable differences; however, it led to a drastic increase in price, and we did not find it suitable for our business model. You should remember that our objective is not to hit the highest disk I/O or to compete with any provider offering high numbers. We would rather establish a balance between price and performance and go with a setup that's ideal from both perspectives.

    Furthermore, we are not the best in the industry in terms of server performance, support, pricing, etc., and neither do we have any such plans. Our objective is to set up the right balance of performance, stability, uptime, support and, most importantly, pricing.
