
What OpenVZ storage backend(s) do you use as a provider?

Hello,

I'm looking for feedback on what storage backends OpenVZ providers are using.

1) Do you use simfs or ploop? (For context, a rough layout-detection sketch follows these questions.)

2) If you use simfs, do you ever plan on moving to ploop?

3) Do you use LVM, with or without thin provisioning?

4) Are you running OpenVZ 6 or 7, or a mixture of both?
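
To make question 1 concrete, here is a minimal sketch (Go) of how a monitoring agent could tell the two layouts apart on an OpenVZ 6 style node. It assumes the stock VE_PRIVATE path of /vz/private/<CTID> and the standard ploop layout (root.hdd/DiskDescriptor.xml inside the private area); both paths are assumptions, not something confirmed here.

```go
// Minimal sketch: guess a container's storage layout from its private area.
// Assumes the default /vz/private/<CTID> path and the standard ploop layout
// (root.hdd/DiskDescriptor.xml); customised nodes may differ.
package main

import (
	"fmt"
	"os"
)

// layout returns "ploop" if the container's private area contains a ploop
// disk descriptor, otherwise it assumes a plain simfs tree.
func layout(ctid string) string {
	if _, err := os.Stat("/vz/private/" + ctid + "/root.hdd/DiskDescriptor.xml"); err == nil {
		return "ploop"
	}
	return "simfs"
}

func main() {
	fmt.Println("CT 101 layout:", layout("101")) // 101 is a hypothetical CTID
}
```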

Why am I asking?

Some time ago I posted a thread on LET about my multi-hypervisor monitoring application, which is intended as a replacement for nodewatch.

I have been hard at work rewriting everything from scratch so that the hypervisor implementations play nicely with each other, allowing a single binary to monitor every available hypervisor. I left OpenVZ until last because I knew it would be the biggest pain to implement.

To support all the storage backends and configurations currently in use, I need to know what's widely deployed. Any host will know that OpenVZ does not hand out the most useful statistics, so I need to know exactly where to focus my implementation effort to be as accurate as possible on disk usage and, most importantly, IO monitoring.
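
As a concrete example of the kind of host-side collection this involves, here is a rough sketch (Go) assuming an OpenVZ 6 node where per-container I/O accounting is exposed under /proc/bc/<CTID>/ioacct and running containers are mounted at the default /vz/root/<CTID>; both locations are assumptions about the node layout, not guarantees.

```go
// Rough sketch: read cumulative per-container I/O bytes and current disk
// usage on an OpenVZ 6 host. Assumes /proc/bc/<CTID>/ioacct (beancounter
// I/O accounting with "read"/"write" byte counters) and the default
// /vz/root/<CTID> mount point; both are assumptions about the node.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
	"syscall"
)

// ioAcct parses cumulative read/write byte counters from the container's
// beancounter ioacct file.
func ioAcct(ctid string) (read, write uint64, err error) {
	f, err := os.Open("/proc/bc/" + ctid + "/ioacct")
	if err != nil {
		return 0, 0, err
	}
	defer f.Close()

	s := bufio.NewScanner(f)
	for s.Scan() {
		fields := strings.Fields(s.Text())
		if len(fields) != 2 {
			continue
		}
		v, perr := strconv.ParseUint(fields[1], 10, 64)
		if perr != nil {
			continue
		}
		switch fields[0] {
		case "read":
			read = v
		case "write":
			write = v
		}
	}
	return read, write, s.Err()
}

// diskUsage reports used/total bytes of a running container's root via
// statfs; the call itself is the same whether the root is simfs or a
// mounted ploop image.
func diskUsage(ctid string) (used, total uint64, err error) {
	var st syscall.Statfs_t
	if err := syscall.Statfs("/vz/root/"+ctid, &st); err != nil {
		return 0, 0, err
	}
	bs := uint64(st.Bsize)
	total = st.Blocks * bs
	used = (st.Blocks - st.Bfree) * bs
	return used, total, nil
}

func main() {
	ctid := "101" // hypothetical container ID
	if r, w, err := ioAcct(ctid); err == nil {
		fmt.Printf("CT %s io: read=%d bytes, write=%d bytes\n", ctid, r, w)
	}
	if u, t, err := diskUsage(ctid); err == nil {
		fmt.Printf("CT %s disk: used=%d of %d bytes\n", ctid, u, t)
	}
}
```

Whether a statfs on the simfs mount reflects the per-container vzquota limits rather than the host filesystem is exactly the kind of detail answers to the questions above would help pin down.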

Thanks in advance for your input; it's highly appreciated.

Comments

  • cubedata Member, Patron Provider

    @r0t3n said:
    I'm looking for feedback on what storage backends OpenVZ providers are using. […]

    We use whatever SolusVM sets up, lol (since we mostly don't even have any of those options with SolusVM, lol ;) if you don't get the joke, you should listen to anthonysmith's rants about SolusVM, lol).
