7 seconds to fill your 10GB VPS


Comments

  • @cosmicgate said: Are there any places in particular to go for food in Singapore? Now that shopping is off my list, I don't know where to go anymore other than Universal Studios. (my gf and I will be there for 4 days)

    Look for coffee shops / eateries / kopitiams for cheap and good food; you'll have more choices that way. You'll also find them in large shopping malls.

    @Kenshin said: Never travel by cab between 5:30pm-7:30pm on weekdays,

    This has changed. From 6pm to midnight you now pay an extra 25% surcharge, every day of the week.

    @cosmicgate, you can refer to this website for taxi fare details.

    I'd recommend travelling by bus / MRT. That will save you a lot. Buy an EZ-Link card and it will help you save even more.

  • @biplab said: This has changed. From 6pm to midnight you now pay an extra 25% surcharge, every day of the week.

    It isn't the surcharge that's the problem, it's the traffic. Assuming @cosmicgate will be staying close to the city (since that's where most hotels are), during those times the traffic can be horrible and getting stuck in a jam isn't just wallet bleeding, but he's wasting precious time as a tourist as well. :)

  • @biplab said: I'd recommend travelling by bus / MRT. That will save you a lot. Buy an EZ-Link card and it will help you save even more.

    A cab would be better, because he is new ...

    It might be cheap but it would be confusing for them.

  • @Kenshin said: It isn't the surcharge that's the problem, it's the traffic. Assuming @cosmicgate will be staying close to the city (since that's where most hotels are), during those times the traffic can be horrible and getting stuck in a jam isn't just wallet bleeding, but he's wasting precious time as a tourist as well.

    I also wanted to alert him. If he travels by cab after 7.30pm he'll surely get chopped. :)

    @Randy said: A cab would be better, because he is new ...

    It might be cheap but it would be confusing for them.

    Agreed. :)

  • @biplab said: I also wanted to alert him. If he travels by cab after 7.30pm he'll surely get chopped. :)

    yes. they will chop tourists

  • @Randy said: yes. they will chop tourists

    They can't, it's metered fare.

  • @Kenshin said: The LSI card is cacheless so that's probably why mdraid performed better.

    yeah that's the problem :( but still awesome :D

  • Kenshin Member
    edited September 2012

    @Mon5t3r said: yeah that's the problem :( but still awesome :D

    Not good enough, I wanted to hit 2GB/sec (drives rated for 500MB/sec). Tested individual drives now, no difference on LSI or onboard SATA3, each drive can do about 400-420MB/sec so 1.5GB/sec looks like my limit for now unless I go pick up a few more SSDs.

  • @Kenshin said: Not good enough, I wanted to hit 2GB/sec (drives rated for 500MB/sec).

    Don't even think about it... there would be a bottleneck somewhere, processor or memory. :P /jk

  • Kenshin Member
    edited September 2012

    Improved performance with 2x SSD onboard, 2x SSD on LSI. Write at 1.6GB/sec (close to previous though), read at 2GB/sec. I think I need more SSDs, LOL.

    # sequential write test: 10GB of zeroes, flushed to disk before dd exits
    dd if=/dev/zero of=sb-io-test bs=1M count=10k conv=fdatasync
    10240+0 records in
    10240+0 records out
    10737418240 bytes (11 GB) copied, 6.87824 s, 1.6 GB/s

    # drop the page cache, then sequentially read back the same 10GB file
    echo 3 > /proc/sys/vm/drop_caches
    dd if=sb-io-test of=/dev/null bs=64k
    163840+0 records in
    163840+0 records out
    10737418240 bytes (11 GB) copied, 5.29551 s, 2.0 GB/s

    # random direct-I/O request-rate test (IOPS)
    ioping -RD
    19456 requests completed in 3000.0 ms, 10163 iops, 39.7 mb/s
    min/avg/max/mdev = 0.0/0.1/1.7/0.0 ms

    # sequential request-rate test on the mounted array
    ioping -RL /mnt
    3520 requests completed in 3000.7 ms, 1315 iops, 328.7 mb/s
    min/avg/max/mdev = 0.6/0.8/0.9/0.0 ms
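
    For anyone wanting to reproduce this kind of setup, here's a minimal sketch of assembling and mounting a 4-disk mdraid stripe; the device names, chunk size and filesystem below are placeholders, not necessarily the exact layout used here:

    # stripe the four SSDs into a single RAID0 array (device names are hypothetical)
    mdadm --create /dev/md0 --level=0 --raid-devices=4 --chunk=512 \
        /dev/sda /dev/sdb /dev/sdc /dev/sdd

    # put a filesystem on the stripe and mount it where the dd/ioping tests run
    mkfs.ext4 /dev/md0
    mount /dev/md0 /mnt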
    
    Thanked by: serverbear
  • Just some updates, probably more relevant to providers than users.

    Did tests with flashcache at this point: flashcache applied to a single HDD, with either 1x SSD or a 4x RAID0 SSD array as the cache device. Read performance through flashcache was stuck at about 340MB/sec regardless of 1 or 4 cache drives. Write performance however improved up to 800MB/sec on 4 drives, and even a single drive achieved a good 400MB/sec. Still, considering these SSDs are easily capable of 500MB/sec read/write each, there's a heavy drop in speed when using them for flashcache.
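
    For context, setting a cache like that up with flashcache would be roughly along these lines; the device names, cache name and writeback mode are assumptions for illustration, not the exact commands used here:

    # build the SSD cache in front of the slow HDD (names are hypothetical)
    flashcache_create -p back cachedev /dev/sdb /dev/sda

    # mount the resulting device-mapper target instead of the bare HDD
    mount /dev/mapper/cachedev /mnt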

    IOPS was where the setup shone. ioping results on flashcache vs the SSD(s) alone were pretty close, within a 5% margin. The bare HDD by itself only pulled about 65 IOPS (a Seagate 250GB, so that's about right). The 4x SSD RAID0 did 10k IOPS as per my reply above, so flashcache's improvement on IOPS is solid, but in terms of transfer speeds it didn't reach the maximum potential of the 4x RAID0 SSDs.

    Since the results were really odd, I did a further test: 1x SSD as flashcache in front of a 3x SSD RAID0 array. The single SSD on its own did 500MB/s write, 428MB/s read, 8k IOPS. The 3x SSD RAID0 did 1.5GB/sec read/write, 11k IOPS. With flashcache layered on top, performance became 857MB/sec write, 375MB/sec read, 9.9k IOPS: an increase in write speed, a drop in read speed, and a minor increase in IOPS.
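
    When numbers look odd like this, one thing worth checking is the cache hit rate; flashcache exposes its counters through device-mapper, so something along these lines should dump the mapping and its read/write hit statistics (the cache name is a placeholder):

    # inspect the flashcache mapping and its hit/miss counters
    dmsetup table cachedev
    dmsetup status cachedev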

    The other thing I investigated was some members' comments about my live servers' IOPS results. IOPS easily reached past 10k, usually 15-18k. However, those servers only run 6x 1TB drives in RAID10, no SSD involved, yet the results beat an array of 4 SSDs in RAID0. The only plausible explanation at this point is the RAID controller's cache. How the controller will perform if I throw in an SSD, I have no idea yet, but I'll probably try it out soon when I put one of these SSDs into a live server.

    Sadly the RAID controller I'm using for my SSD tests has no cache, so unless I get my hands on an LSI 9265 or Adaptec 6805 (not recommended by reviews), I probably can't come up with more substantial results. But as it stands, RAID controller cache may be more important than flashcache at this point.
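
    One way to sanity-check whether a controller's cache is inflating IOPS numbers would be a random-read run over a working set much larger than that cache, e.g. with fio; the file name, size and job parameters here are just an assumed example, not something run above:

    # random 4k direct reads over an 8GB file, far too big to sit in a typical 512MB-1GB controller cache
    fio --name=randread --filename=/mnt/fio-test --size=8G --rw=randread \
        --bs=4k --direct=1 --ioengine=libaio --iodepth=32 --runtime=60 --time_based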

    Thanked by: Amfy, Mon5t3r
  • Melita Member, Host Rep
    edited September 2012

    I was thinking of some configurations:

    1. 4x RAID10 HDDs (general VPS config)
    2. 4x RAID10 SSDs (for an SSD VPS config)
    3. 2x RAID1 HDDs (mirroring only) + 1 SSD for caching

    Usually #2 is more expensive than #1, and you can only offer less disk space with #2, which might turn off some customers. But you get better performance with #2, which might attract other types of customers, depending on their needs.

    I was wondering if it's possible to achieve (and offer a VPS with) HDD-like disk space and SSD-like performance, at around the same cost as #1, while still not losing the redundancy aspect. That's why I'm proposing #3 as an idea, although I don't know how well it would work (I've never tested it).

    Besides, if the SSD fails while caching and you need to replace it, would that break the redundancy of the 2x RAID1 HDDs? Or is there such a thing as 2x RAID1 HDDs + 2x RAID1 SSDs for caching? Or can we just trust that SSDs have a more predictable lifespan than HDDs because they have no mechanical parts? Or can you turn caching off and on on the fly, without rebooting the server?
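
    For what it's worth, with flashcache as used earlier in the thread, turning the cache off without a reboot looks like it would be roughly the following; the device names are placeholders and this assumes a writethrough/writearound cache, so there are no dirty blocks to flush first:

    # stop using the cached mapping and go back to the bare mirrored array (hypothetical names)
    umount /mnt
    dmsetup remove cachedev
    flashcache_destroy /dev/sdc   # wipe the cache metadata on the SSD
    mount /dev/md0 /mnt           # remount the plain 2x RAID1 HDD array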

    So in Kenshin's case, to achieve a lower cost, maybe 4x RAID10 HDDs + 1 SSD for caching might give roughly the same performance as your current 6x RAID10 HDDs?

    Sorry if my thoughts are wrong; I haven't read any article / scientific test on this. But I keep coming back to the idea because if some VPS provider could offer a VPS with SSD-like performance and high disk space, that would be nice!

  • I think quite a number of VPS providers are implementing 4x RAID10 + SSD caching already, but I don't think any of them have posted actual benchmarks so I'm experimenting on my end.

    In terms of cost, 4x RAID10 HDDs + 1 SSD is pretty close to 6x RAID10. With the benefit of the SSD's IOPS and read/write speed, I'd say it's a good and cheap solution. SSDs are much hardier than HDDs: out of all the SSDs I've deployed for office use (all my office PCs run on SSDs for higher productivity), only 1 has failed so far, and that was due to a firmware issue which Intel had fixed but we didn't patch until after the issue occurred. That case was a total data loss though.

    SSD caching is pretty decent based on my tests, but nothing beats the raw power of pure SSDs themselves. If I launch an SSD product, it'll likely be RAID0 with daily backups to HDD. The failure rate of SSDs is really low, and since the total capacity is rather small (4x 240GB < 1TB), a daily rsync during off-peak hours makes the most sense, since SSD reads are cheap and fast. I never found RAID10 on SSDs worth its cost. On HDDs the biggest killer is sector failure due to the media, aka bad sectors. If you use the same brand/type of SSDs they accumulate the same write wear, which is the major killer in SSDs; in a RAID10 with identical write patterns, if they fail they'll likely fail at around the same time, which makes the mirroring rather pointless.
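
    As a rough illustration of the daily-backup idea, a nightly off-peak rsync from the SSD array to an HDD volume could be as simple as the crontab entry below; the paths and the 4am schedule are just assumptions:

    # /etc/crontab entry: sync the RAID0 SSD volume to HDD storage every night at 4am (paths are hypothetical)
    0 4 * * * root rsync -aAXH --delete /mnt/ssd-raid0/ /mnt/hdd-backup/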
