New CacheCade enabled KVM Node Read Cache Result
Still fooling around with the trial key, but the regular key will arrive tomorrow along with the CPUs I was supposed to get. I ended up with E5-2609s instead of 2620s; the 2620s are going in tomorrow.
Might want to get a towel.
Results of the CacheCade read cache with 4x Intel 520 Gen2 6Gb/s SSDs in RAID 0, accelerating a 4x 1TB 7200RPM SAS 6Gb/s array. The test reads a 2GB file and writes it out to /dev/null; if you write it to a file instead, you just get bottlenecked by the write speed. I will enable RW caching and redo the cache set as RAID 10 as well. LSI 9265-8i with CacheVault.
32768+0 records in
32768+0 records out
2147483648 bytes (2.1 GB) copied, 0.290297 s, 7.4 GB/s
Comments
32768+0 records out
2147483648 bytes (2.1 GB) copied, 0.290297 s, 7.4 GB/s
Meh. Try echo 1 > /proc/sys/vm/drop_caches
then redo the test.
Or test with a file that is larger than the node's memory.
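The two suggestions above can be combined into one honest re-test. This is a sketch: the file path is illustrative, and the lines that actually create the file, drop the cache, and re-read need root, so they are left commented.

```shell
# Size the test file off the node's RAM so the read cannot be served
# from the page cache. Total RAM in kB, from /proc/meminfo:
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
# A file of 2x RAM is comfortably larger than anything the cache can hold
file_mb=$(( mem_kb * 2 / 1024 ))
echo "RAM: $(( mem_kb / 1024 )) MB -> use a ${file_mb} MB test file"

# dd if=/dev/zero of=/tmp/ddtest bs=1M count=${file_mb}  # create test file
# sync && echo 1 > /proc/sys/vm/drop_caches              # drop page cache (root)
# dd if=/tmp/ddtest of=/dev/null bs=64k                  # now a real disk read
```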
If we assume 500 MB/s per Intel 520 and that RAID 0 achieves perfect scaling, the max is 2 GB/s. You're likely hitting RAM.
I don't see any hard drive of any technology (well, except a RAM drive, but that is not a hard drive) that can achieve that. Even a RAM drive will be limited by the SATA interface; SATA 3 caps at 6 Gb/s, roughly 600 MB/s.
As far as I'm aware, most types of RAM don't benchmark at 7.4GB/s, and are probably more like half of that.
So he has very good RAM, because I will never believe that is a 2 GB CPU cache :P
Not really:
This is on a Xeon E3-1230V2 with DDR3-1600 ECC dual-channel RAM, so quite a normal result for reading from the RAM.
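A quick back-of-the-envelope check supports this (nominal spec numbers, not a measurement on this box):

```shell
# Theoretical peak for dual-channel DDR3-1600:
# 1600 MT/s x 8 bytes per transfer x 2 channels
mt=1600; bytes=8; channels=2
peak_mb=$(( mt * bytes * channels ))   # in MB/s
echo "theoretical peak: ${peak_mb} MB/s (~$(( peak_mb / 1024 )) GB/s)"
# 7.4 GB/s from dd is well under that, so a page-cache read is plausible.
```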
Edit:
If you repeat the test several times it gets even better:
The card's cache is 1GB, so I ran with a 2GB file. If you do the normal drive test of the HDDs you get the expected result of around 400MB/s. However, I'm not looking for raw speed; the idea is that the cache speeds up the most-requested data on the node without having to spend gobs of money for less capacity per node, as you would going all-SSD. The cache drives will also be rebuilt into RAID 10 for redundancy, and I want to play with enabling write cache as well.
And then...... The power goes down.
I enabled the trial of CacheCade today and write performance was horrible; the regular array is way faster than a pair of SSDs in RAID 0.
That 7.4GB/s is not right; I can achieve that with the cache disabled, so it is RAM. I am glad you made me look for the trial key. My take from it is that this is not worth $225, flashcache FTW. Now this 9266-4i card is bitchin'!
I haven't yet had the chance to try SSD caching. What SSDs are you using for this?
Looks like @FRcorey used Intel 520s; I used Samsung 830s.
@miTgiB what cache hit percent are you seeing (dmsetup status) ?
Here is a full node where I gathered up all the abusive VPS in Los Angeles and moved them together. They are not abusive users; their VPS just gain the most benefit from the cache.
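For anyone wanting to pull the hit rate asked about above: flashcache exposes counters via `dmsetup status` and, in the versions I've used, under /proc/flashcache/. Exact paths and field names vary by flashcache version, so the counter values below are placeholders showing only the arithmetic.

```shell
# Where the real counters come from (version-dependent, check your install):
#   dmsetup status <cachedev>
#   cat /proc/flashcache/<cachedev>/flashcache_stats
# Placeholder counters standing in for real values:
reads=100000
read_hits=83000
hit_pct=$(( read_hits * 100 / reads ))
echo "read hit rate: ${hit_pct}%"
```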
Wow! I can see why it helps. I guess I'll be rolling it out too.
It is such a small cost, and by being creative, shoving the SSD internally and mounting it with velcro, you can give performance near pure SSD with the large allotment of space of a traditional LEB. Yes, there are some tweaks to get a little more mileage out of this setup, but keeping $225 in my pocket by not buying CacheCade is well worth the small added time spent using flashcache. I don't see anything wrong with paying for CacheCade, but I think all the providers around here in it for the long haul are smart enough to save that money.
The only downside is purely marketing right now, those providing pure SSD nodes will win, for now. Give it a few months and people will see the foolishness of paying a premium for small slivers of disk space.
Btw what made you choose writearound mode and not writethrough? (for writeback it is obvious why it is excluded - not safe).
While the node above was the first I tried this on, it is also the last 4-disk array I've built, and the larger arrays can write much faster than an SSD. With the load the SSD is taking off the array for reads, it just felt (w)right.
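For anyone following along, the mode is picked with the `-p` flag of `flashcache_create`. A sketch of the trade-off discussed above; device names are placeholders and the actual create/mount lines are left commented:

```shell
# flashcache caching modes (-p flag of flashcache_create):
#   thru   = write-through: writes hit disk and SSD, safe if the SSD dies
#   around = write-around:  writes bypass the SSD entirely, also safe
#   back   = write-back:    dirty blocks live only on the SSD until flushed
mode="around"
case "$mode" in
  thru|around) safe="yes" ;;
  back)        safe="no"  ;;
esac
echo "mode=${mode} safe_if_ssd_fails=${safe}"
# flashcache_create -p ${mode} cachedev /dev/sdb /dev/sda3  # SSD, backing disk
# mount /dev/mapper/cachedev /vz
```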
What ratio of cache GB per TB of user space did you choose?
Reminds me of someone who is putting three 3.5" HDDs inside a 1U case with 2 caddies :P
I never put that much thought into it. I see great results from a 64GB SSD, of which I use 50GB for the cache, and I rarely see 1TB of user data on an OpenVZ node. I expect the E5 nodes will see more space used, but as of now I am not seeing it.
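Put as a ratio, using the figures above:

```shell
# Cache-to-data ratio from the post above: ~50 GB of SSD cache
# fronting up to ~1 TB of user data.
cache_gb=50
data_gb=1024   # ~1 TB
ratio_pct=$(( cache_gb * 100 / data_gb ))
echo "cache is ~${ratio_pct}% of user data"
```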
64GB SSDs are hopelessly slow. Seriously. Even 128GB models have issues. Get a Plextor M5 Pro or an OCZ Vertex 4 if going for 128GB; they are the fastest right now and not too expensive. Yes, they all advertise blazing speeds, but those numbers only hold on the larger-size SSDs.
Otherwise, for most workloads I'd go for a Samsung 830 at 256GB and above. Intel 520, and anything SandForce-related, I would not trust on a production server right now.
Also, if you are running Ivy Bridge and a 7-series motherboard, be sure to check if Intel has released their latest drivers on Linux; they're out on Windows. Those have RAID 0 TRIM enabled for SSDs and can improve performance quite a lot.
Throwing money at something is the easy way, but the Samsung 830s, even the 64GB and 128GB models, have faster reads than a 12-disk RAID 10 array. And OCZ? How did you get a working one? I didn't know they existed.
We can quote this and that benchmark all day long; at the end of the day, it is real-world use that matters to me, and not much else. Stability is always my #1 goal, always has been, always will be. Things break, I am realistic, but I would rather sacrifice performance for stability any day.
I use this motherboard. It uses the Intel C602 chipset, so no idea if that is 7 series or not, but the day I buy an Intel board is the day you should run fast and far from my service.
They are one of the biggest SSD makers; they probably have more retail sales than Samsung and Intel. Samsung just gets enough OEM sales from Sony/Apple/etc. OCZ makes enterprise stuff too.
Patsburg is 7 series; anything Ivy Bridge is. It's just that you can run a Sandy Bridge motherboard with an Ivy Bridge CPU, so I had to be explicit.
Stay away from anything Sandforce related then. It's a PAIN.
Check out the Plextor M5 / M5 Pro; you'll be surprised. Edit: the M3 series will do and will cut cost. You don't need Toggle NAND.
Oh, cut the crap about SandForce, please. Yes, they've had a lot of bugs, but there has been enough time (years) that the bugs in the firmware should have been worked out by now. Besides, Intel uses their own firmware on their SandForce SSDs.
Should have, but after their recent TRIM saga when the SanDisk Extreme came out, and their new 5-series firmware where TRIM was suddenly disabled for everyone, I wouldn't trust them. They had enough fun with the BSODs already.
And they still suffer from most of the issues. There is a base firmware; Intel just patches the known issues that SandForce is too lazy to fix.
We're talking about production nodes that run lots of clients' VPS here. I wouldn't want any corruption, slowdowns, or reboots.
Their failure rates on recent SSD models are still astronomical. I will not be looking at OCZ for many years to come, not even their enterprise line. I went with Samsung for their low reported failure rate while being a tad cheaper than Intel.
+1. My choice for now too. Samsung 830 is faster than Intel 520 anyway. Real world usage that is.
No need to use velcro: if you have spare slots, just get a PCIe SSD. They're just as cheap and slightly faster, since they sit directly on the PCIe bus instead of behind the RAID card.
Not a bad idea, but I'd like a working one, and I only see OCZ; the only other option is Intel for $4000.
I bought an OCZ once and had to RMA it 4 times, still never had a working one, and finally tossed it in the trash because the postage for all those RMA shipments was killing me.