New on LowEndTalk? Please Register and read our Community Rules.
How's this for some SSD I/O goodness
dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync && rm -rf iotest
13848+0 records in
13848+0 records out
907542528 bytes (908 MB) copied, 74.9301 s, 12.1 MB/s
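Two things stand out in that paste: the rate, and that only 13848 of the requested 16384 blocks were written, which suggests the run was interrupted (e.g. Ctrl-C) rather than finishing. The reported rate itself checks out, though; a minimal sketch using awk with the figures from the paste above:

```shell
# 907542528 bytes over 74.9301 s should indeed come out near 12.1 MB/s
# (dd reports decimal megabytes, i.e. bytes / 1e6 / seconds).
awk 'BEGIN { printf "%.1f MB/s\n", 907542528 / 1e6 / 74.9301 }'
```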
Yum... it's been like this for hours. I submitted a support ticket and got a response after an hour (the SLA is 30 minutes), then nothing since, and it's been several hours now. Apparently when I purchased this (it was a deal) it was supposed to be on a 10GB SSD drive, though I'm not sure. I can't even ls. Trying to be patient... my client is not happy...
I am not complaining though; the company I am with treats me well. Just wondering if anyone has insight as to why the speed would be so low? A failing drive?
Comments
Busy / oversold node.
Bad Raid Array
Abuse
Server needs to be restarted to clear caches etc.
Bad Drive
Memory Malfunction
CPU overheated
Share the iops please.
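To help rule the busy/oversold guess in or out from inside the guest, here's a rough sketch that reads the cumulative iowait share from /proc/stat (Linux only; on OpenVZ the steal column matters too):

```shell
# Field 6 of the "cpu" line in /proc/stat is cumulative iowait jiffies;
# a large share of total CPU time suggests the disk, not your workload,
# is the bottleneck. (Rough single-snapshot figure, not a rate.)
read -r _ user nice system idle iowait _rest < /proc/stat
total=$((user + nice + system + idle + iowait))
echo "iowait so far: $((100 * iowait / total))% of CPU time"
```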
Well, funny that I restarted my VPS before all this and could boot it back up, which is why I sent the first support ticket.
Problems happened after that.
I don't think it's oversold before this it was fine and dandy.
Thanks @Mun, I'll wait for the support ticket.
You cannot defeat this. http://www.webhostingtalk.com/showthread.php?t=1257818
Trying to untar ioping ........ trying ............ @budingyun ouch!
Maybe software RAID SSDs?
Don't think I'll be able to get those iops for you...
Digital Ocean?
@unused nope.
I won't reveal who it is; it's a one-off thing. I've had the server for ages now, and they treat me well. I assume they won't answer the ticket because they are busy with the node... which is fair.
Ioping results @budingyun
Virtuozzo OR OpenVZ Virtualisation detected
ioping code.google.com/p/ioping/
ioping.sh 0.9.8
shell wrapper script by George Liu (eva2000)
http://vbtechsupport.com
Virtuozzo or OpenVZ Virtualisation detected
dd (sequential disk speed test)...
dd if=/dev/zero of=testfilex bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 115.209 s, 9.3 MB/s
starting ioping tests...
ioping disk I/O test (default 1MB working set)
disk I/O: /
--- / (simfs /vz/private/11297) ioping statistics ---
5 requests completed in 4482.5 ms, 1200 iops, 4.7 mb/s
min/avg/max/mdev = 0.2/0.8/2.7/0.9 ms
seek rate test (default 1MB working set)
seek rate: /
--- / (simfs /vz/private/11297) ioping statistics ---
7606 requests completed in 3000.0 ms, 2999 iops, 11.7 mb/s
min/avg/max/mdev = 0.0/0.3/52.8/0.8 ms
sequential test (default 1MB working set)
sequential: /
--- / (simfs /vz/private/11297) ioping statistics ---
1554 requests completed in 3002.4 ms, 544 iops, 135.9 mb/s
min/avg/max/mdev = 1.0/1.8/47.9/2.0 ms
sequential cached I/O: /
--- / (simfs /vz/private/11297) ioping statistics ---
21836 requests completed in 3000.8 ms, 22306 iops, 5576.5 mb/s
min/avg/max/mdev = 0.0/0.0/5.7/0.0 ms
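If you're puzzling over how ioping arrives at its mb/s figures: it's just iops multiplied by the request size, which by default is 4 KiB for the random and seek tests and 256 KiB for the sequential one. A quick check against the numbers pasted above:

```shell
# iops * request size reproduces ioping's reported throughput
# (figures taken from the paste above).
echo "seek:       $((2999 * 4)) KiB/s  (~11.7 mb/s reported)"
echo "sequential: $((544 * 256)) KiB/s (~135.9 mb/s reported)"
```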
@MannDude should be able to help you out, tagging him in here.
@RobertClarke don't be silly, we've experimented with SW raid in the past and speeds were not that much different..
@CVPS_Kevin I don't think I need help; I've submitted a support ticket anyway. I'm not worried, just curious about things that can make this happen, which I've had answers to, and I was asked for iops, which are above.
No need to tag someone, especially since it may not be that provider...
Ah, well, I think the SSD VPS + "30 minute SLA" kind of gave it away, along with your past history. Anyhow, if it is them, go ahead and PM @MannDude; he seems to be online now and he definitely tends to his own crop, so he should be able to get everything straightened out for you. Best of luck to you.
@CVPS_Kevin
Never assume :P, but thanks!
Only trying to help! Definitely sounds like something the provider needs to address from their end though. Keep us updated on the situation.
Well, it all seems to be fixed now; the I/O is up to 295 MB/s, and the 1.1 GB test took 3.6 seconds.
I haven't heard anything from support about it being fixed, though I did receive this a few hours ago:
"We are checking with our upstream provider and update you further. Please hold on."
...
Wait, he was this guy with Eve? :-)
No, he was the guy with the failed VPS companies... and a VPS white-label reseller...
"— that was when Jacob acquired HostLatch.net from Adam (of VPSLatch.net currently)"
I am confused by your posts; I don't follow. I am kinda new here...
Too much love?
Mine perform at >400 MB/s :P
They sure are if you put any kind of sustained load on them. Worse than HDDs.
No problems here. I cannot think of any inherent reason why it would be a problem, outside of some unique circumstance.
Throw 4 Samsung 830s together with software RAID, load them up with VPS users, and get back to me in a couple months
@Nick_A is correct, software RAID with SSDs is just so temperamental, especially with VPSs. Avoid.
Again, I don't doubt specific circumstances can be problematic, but I have a few systems doing very heavy IO (not VPS) almost 24/7 and we have had no problems.
Sure, avoid it if you don't want to figure out what the problem is -- which is completely valid; sometimes it's not worth it.
I don't believe this was us, buddy. Last night when you tagged me, I saw nothing in the existing ticket queue that appeared to be about any of our SSD nodes, nor do we have a 30 minute SLA. We do make an initial response to tickets within 30 minutes most of the time, but it's not an SLA.
@ATHK, if this was us, send me a PM and I'll look into it more, as even the updated speeds aren't anything super great; we have some standard non-SSD nodes with comparable speeds.
All the best,
Sounds more like a configuration issue or a problem with that particular model of SSD.
The issue you may potentially face with SW RAID and SSDs is if they are connected via SATA 2 and your SSD can saturate the port.
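To put numbers on that (a back-of-the-envelope sketch, assuming SATA 2's 3 Gbit/s line rate and 8b/10b encoding):

```shell
# SATA 2 signals at 3 Gbit/s, but 8b/10b encoding carries only 8 data
# bits per 10 line bits, so the usable ceiling is about 300 MB/s --
# which one decent SATA SSD can saturate on its own.
echo "SATA 2 usable bandwidth: $(( 3000 * 8 / 10 / 8 )) MB/s"
```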