New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Comments
Here is the problem:
correct, Jingling kills io
@drserver: Could it be an issue with the LSI controller or something else? The current BBU has died, so the cache is not enabled.
Not having the BBU is going to disable your write cache.
I am not sure if LSI provides any tools to show actual I/O stats, but if they do, what do they show?
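For what it's worth, on LSI MegaRAID controllers the usual tool is MegaCli; it can at least show the BBU state and the current cache policy (adapter numbers and the exact binary path vary by system, so treat these as illustrative):

```shell
# Battery state - a failed or missing BBU typically forces WriteThrough:
MegaCli -AdpBbuCmd -GetBbuStatus -aAll

# Current cache policy for all logical drives:
MegaCli -LDGetProp -Cache -LAll -aAll
```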
still finding issues
Just for the record: make a RAM disk for the Windows swap file and for IE's temp files, and you will not see any of those issues.
@jazz1611 Why did you open a thread here if you were not going to let us know the resolution or keep the topic open for conversation? This is not your help desk!
If you are not going to open threads for actual conversation, and only for your own help, then please don't open threads here. It's a huge waste to open a thread only to close it without even discussing your resolution. It creates a bunch of useless threads that clutter the board.
Cheers!
I followed this guide: https://www.binarylane.com.au/support/articles/1000055889-how-to-benchmark-disk-i-o
and here are my results for random read/write performance and random read performance:
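For reference, the linked BinaryLane guide uses fio for this kind of test; a typical mixed random read/write invocation looks something like the following (fio must be installed; the file size, runtime, and 75/25 read mix are just example parameters):

```shell
# 4K random I/O, 75% reads / 25% writes, direct I/O to bypass the page cache:
fio --name=randrw --ioengine=libaio --direct=1 --bs=4k \
    --rw=randrw --rwmixread=75 --size=256m --runtime=60 \
    --numjobs=1 --group_reporting
```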
Jingling kills disk I/O buddy.
@wych I tested disk I/O without any VMs running, to check baseline performance first.
And then when you run the VMs you get high iowait; looks like you have your answer.
As @drserver advised, I would try a RAM disk.
Yep, no magic cure.
Jingling is a particularly crappy app that does lots of really small writes, which drives the disk crazy. If you ask me, running Jingling without RAID 10 SSDs is not an option.
You can sacrifice RAM for a Windows RAM disk, but your best bet is SSDs, as many as you can get.
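One common way to set up a RAM disk on Windows is the ImDisk driver (assuming it is installed; the drive letter and size here are just examples):

```shell
rem Create a 512 MB NTFS RAM disk mounted at R: (requires ImDisk)
imdisk -a -s 512M -m R: -p "/fs:ntfs /q /y"
```

After that you would point the page file and IE's temporary files at R: through the usual System and Internet Options settings.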
Now you can see firsthand why most providers don't like Jingling.
Why? RAID 10 has lower performance than RAID 5. Each node is running just one VM.
Hm? Did you even read the output? The first result is clearly cached (3 HDDs CANNOT do more than 400 MB/s); the second is real, and a good result.
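The cached-vs-real difference is easy to reproduce with dd: without a sync option dd mostly measures the page cache, while conv=fdatasync waits for the data to actually reach the disk before reporting (the file name and sizes below are just examples):

```shell
# Mostly measures RAM (page cache) - often absurdly fast:
dd if=/dev/zero of=ddtest bs=1M count=64 2>&1 | tail -n 1

# Flushes to disk before reporting - the honest number:
dd if=/dev/zero of=ddtest bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1

rm -f ddtest
```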