What image format (QCOW2 vs RAW) and storage format (LVM block vs ext4 file)?

edited March 2020 in General

I am setting up a CentOS 7 KVM node. It looks like I will probably be using Virtualizor as the control panel. What is the best all-around storage configuration?

Right now I am leaning towards QCOW2 on an ext4 file system, just saving the images in a directory. Perhaps a bit of a performance hit, but I will be using SSD RAID 1, so it should not be too bad. The advantage is that it's simple and supports thin provisioning, snapshots, and migration.
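
For reference, creating such an image is a one-liner, and qcow2 files are sparse by default, so the thin provisioning comes for free (the path and size below are just placeholders; Virtualizor would normally generate this for you):

    # create a 50G thinly provisioned qcow2 image in a plain directory on ext4
    qemu-img create -f qcow2 /var/lib/libvirt/images/vm101.qcow2 50G

    # "disk size" shows the actual space used vs. the 50G virtual size
    qemu-img info /var/lib/libvirt/images/vm101.qcow2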

Being a bit slower is the only downside I can see. I already have a node running RAW images on LVM block storage and it's a bit more involved to administer. We have never had a need to add disks to a physical server, which is where LVM could potentially have an advantage. Some things are a bit of a pain, like backups: we don't use thin provisioning because I don't think Solus had that option at the time, or maybe it didn't work well back then, so backups are full size.


Comments

  • rm_ IPv6 Advocate, Veteran
    edited March 2020

    LosPollosHermanos said: QCOW2 on ext4 file system just saving it in a directory.

    I hope you are not planning to sell this.

    Just for toying around and testing, files in a directory are fine, but for any kind of production use, LVM volume per VM image is a must; thin LV if you need to save space or to use snapshots a lot, or plain for better reliability and performance (SSDs are big enough to allow for that these days).
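
    Roughly, that per-VM LV setup looks like this with stock LVM tooling (the VG name and sizes are placeholders, and a panel like Virtualizor normally runs the equivalent for you):

        # thin pool inside an existing volume group, then one thin LV per VM
        lvcreate -L 800G -T vg0/thinpool
        lvcreate -V 50G -T vg0/thinpool -n vm101

        # or a plain, fully allocated LV per VM for simplicity and consistent performance
        lvcreate -L 50G -n vm102 vg0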

  • AlwaysSkint Member
    edited March 2020

    For me, it's raw/LVM/ext4 'cos I can cope with that and it's relatively uncomplicated. Don't like xfs 'cos it can't be shrunk, and I keep ext2 for tmp/backup partitions. I'm willing to put up with the performance/reliability hits, if any.
    I've been through iterations of ext2, ext4, bfs, jfs etc. and well, life's too short. ;)

  • edited March 2020

    @rm_ said:

    LosPollosHermanos said: QCOW2 on ext4 file system just saving it in a directory.

    I hope you are not planning to sell this.

    Just for toying around and testing, files in a directory are fine, but for any kind of production use, LVM volume per VM image is a must.

    Why?

    @rm_ said:
    thin LV if you need to save space or to use snapshots a lot, or plain for better reliability and performance (SSDs are big enough to allow for that these days).

    A QCOW2 image file in a directory can do snapshots and thin provisioning. LVM adds another layer, which definitely does not make it more reliable. Performance is a QCOW2 vs RAW thing, not ext4 vs LVM (which adds another layer on top of ext4). Adding an LVM layer actually reduces performance a tiny bit.
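
    For example, internal qcow2 snapshots need nothing beyond qemu-img (the image name is a placeholder; these are best taken with the VM shut down, or via the panel/virsh for a running guest):

        qemu-img snapshot -c pre-upgrade vm101.qcow2   # create a snapshot
        qemu-img snapshot -l vm101.qcow2               # list snapshots
        qemu-img snapshot -a pre-upgrade vm101.qcow2   # roll back to it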

  • @AlwaysSkint said:

    I've been through iterations of ext2, ext4, bfs, jfs etc. and well, life's too short. ;)

    Can you elaborate? I am only interested in ext4. LVM runs on top of that if I choose to use that at all. I am not considering xfs, zfs, btrfs.

  • LosPollosHermanos said: I am only interested in ext4. LVM runs on top of that if I choose to use that at all.

    ext4 (and other filesystems) run on top of LVM. For me, LVM gives some flexibility in partition sizes, and encryption when so inclined. It does add a little more complexity over old skool partitioning, however, and I'd never span logical volumes over physical discs.
    There's no point in journaling temp files, nor local backups when there should be offline ones too. For me, ext4 was a logical progression from ext3, but I had exposure to AIX a couple of decades ago, when jfs was the filesystem "flavour of the year". Then there are filesystems for databases, blah de blah; only really applicable for specific use cases.
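
    To make the layering concrete, a minimal stack is block device -> PV -> VG -> LV -> filesystem; something like this, with the device names as placeholders (e.g. an md RAID1 array):

        pvcreate /dev/md2
        vgcreate vg0 /dev/md2
        lvcreate -L 100G -n images vg0
        mkfs.ext4 /dev/vg0/images                       # ext4 sits on top of the LV
        mount /dev/vg0/images /var/lib/libvirt/images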

  • edited March 2020

    I searched and you are correct: ext4 runs on top, not underneath.

    I am still wondering why I cannot just use ext4 on its own and save VMs as QCOW2 image files in a directory. I do not need to span disks or any of the other things LVM gives you. QCOW2 can do thin provisioning and snapshots without the need for LVM.

    So the only justification I can think of for using LVM is if I use RAW instead of QCOW2.

  • perennate Member, Host Rep

    LosPollosHermanos said: I am still wondering why I cannot just use ext4 on its own and save VMs as QCOW2 image files in a directory. I do not need to span disks or any of the other things LVM gives you. QCOW2 can do thin provisioning and snapshots without the need for LVM.

    You can; the concern is the performance versus LVM, but depending on your storage configuration you may find that the difference is negligible.

  • edited March 2020

    @perennate said:

    LosPollosHermanos said: I am still wondering why I cannot just use ext4 on its own and save VMs as QCOW2 image files in a directory. I do not need to span disks or any of the other things LVM gives you. QCOW2 can do thin provisioning and snapshots without the need for LVM.

    You can; the concern is the performance versus LVM, but depending on your storage configuration you may find that the difference is negligible.

    My storage is 2xSSD as SW RAID1. As far as I can tell, any discussion about performance is more about QCOW2 vs RAW than anything else, not LVM block storage vs ext4 file storage. I know QCOW2 is going to be a little bit slower. Supposedly it is much better than it used to be, with an almost negligible difference now.

    So I guess what it comes down to is: should I use a QCOW2 image file in a directory on an ext4 file system, or RAW on LVM volume group block storage? I should also point out that I will be hosting Linux virtual servers using ext4, not Windows.

  • perennate Member, Host Rep

    LosPollosHermanos said: My storage is 2xSSD as SW RAID1

    You have to test it yourself to see what offers the best performance. Or you can just test with qcow2 on ext4 filesystem and see if the performance is at least satisfactory.

  • edited March 2020

    @perennate said:

    LosPollosHermanos said: My storage is 2xSSD as SW RAID1

    You have to test it yourself to see what offers the best performance. Or you can just test with qcow2 on ext4 filesystem and see if the performance is at least satisfactory.

    I guess so. I was hoping to avoid that. It will take me quite a bit of time and effort to test it right. I would have thought there would be lots of info about this already. I found some people posting their results but most of it is several years old. QCOW2 is supposedly greatly improved since then and QEMU has improved as well.

  • perennate Member, Host Rep
    edited March 2020

    LosPollosHermanos said: I guess so. I was hoping to avoid that. It will take me quite a bit of time and effort to test it right. I would have thought there would be lots of info about this already. I found some people posting their results but most of it is several years old. QCOW2 is supposedly greatly improved since then and QEMU has improved as well.

    I mean, you can get results for a similar storage config, but it really may not end up being the same just due to some stupid random factor.

  • jlay Member
    edited March 2020

    Latency is generally lower with VM disks backed by logical volumes instead of files. It's not traversing two filesystems, so it's generally quicker. I prefer this method when available, but it's not live migration friendly

  • RAW gives better performance.

  • @stufently said:
    RAW gives better performance.

    One would think so. But I don't see any recent posts from anyone proving it.

  • edited March 2020

    @jlay said:
    Latency is generally lower with VM disks backed by logical volumes instead of files. It's not traversing two filesystems, so it's generally quicker. I prefer this method when available, but it's not live migration friendly

    That would definitely be an advantage of LVM or any block storage if that is true. I can't imagine it would add that much latency to local storage though. I would probably need to load it up with lots of small simultaneous read/write operations to get a worst-case.
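
    If it helps, ioping is a quick way to compare that latency directly (the paths and device names below are placeholders); it issues small requests by default, so running it while the node is busy approximates the worst case:

        ioping -c 20 /var/lib/libvirt/images   # latency through the ext4 file path
        ioping -c 20 /dev/vg0/vm101            # latency of an LVM logical volume (reads only by default)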

  • Falzo Member

    from my experience, the performance of qcow2 heavily depends on how you deploy these images. no clue how virtualizor does that, but I recommend looking into options like cluster_size and preallocation - they can do some magic ;-)
    at least with qcow2 and the virtio-scsi drivers, fstrim and hole-punching on sparse images work a lot better than they do with raw.
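
    for instance, something like this at image-creation time (names and sizes are just an example, and whether virtualizor exposes these options is another question):

        # larger clusters plus metadata preallocation can noticeably help qcow2 on ext4
        qemu-img create -f qcow2 -o cluster_size=2M,preallocation=metadata vm101.qcow2 50G
        # pair it with a virtio-scsi disk and discard=unmap so a guest fstrim punches holes in the sparse file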

  • jfrac Member, Host Rep

    I'm using LVM; it's much better when you are using iSCSI SANs, since multiple servers can connect to the same VGs at the same time. You would have to use NFS to do that with qcow2.

  • edited March 2020

    @jfrac said:
    I'm using LVM; it's much better when you are using iSCSI SANs, since multiple servers can connect to the same VGs at the same time. You would have to use NFS to do that with qcow2.

    For what I do, it makes more sense to have dedicated servers with just local storage scattered around multiple datacenters, rather than concentrating them in a few with SANs.

  • Levi Member

    Do not use zfs in serious production with millions of files. Always use qcow images over raw: more performance, more flexibility. Do not trust any advice on this forum regarding your own setup. Test, test, test.

  • @LTniger said:
    Do not use zfs in serious production with millions of files. Always use qcow images over raw: more performance, more flexibility. Do not trust any advice on this forum regarding your own setup. Test, test, test.

    Thanks for the info. I am not considering zfs but I am definitely considering qcow2.

  • jlay Member

    @LosPollosHermanos said:

    @jlay said:
    Latency is generally lower with VM disks backed by logical volumes instead of files. It's not traversing two filesystems, so it's generally quicker. I prefer this method when available, but it's not live migration friendly

    That would definitely be an advantage of LVM or any block storage if that is true. I can't imagine it would add that much latency to local storage though. I would probably need to load it up with lots of small simultaneous read/write operations to get a worst-case.

    It's pretty noticeable in the variance of disk latency. With a Windows 10 gaming VM I'd see anywhere from 10 to 2000ms latency on the disk with a file on a filesystem, whereas with a logical volume it's consistently below 100ms (if not lower).

  • jfrac Member, Host Rep

    @LTniger said:
    Do not use zfs in serious production with millions of files. Always use qcow images over raw: more performance, more flexibility. Do not trust any advice on this forum regarding your own setup. Test, test, test.

    I'd like to hear about this crash with millions of files. Was it linux zfs or freebsd zfs?

  • Francisco Top Host, Host Rep, Veteran

    jfrac said: I'd like to hear about this crash with millions of files. Was it linux zfs or freebsd zfs?

    It's more an ARC issue. We used to use ZFS for our OpenVZ backups and we'd be moving around 50 - 100 million files or so. We used rsync to take backups etc. and the metadata work would completely murder the node. It just... failed.

    Francisco

  • Levi Member
    edited March 2020

    Yea, and if you fill your storage 95%+ you will face a massive performance impact. And if one disk in a pool needs to be resilvered... I still have nightmares about it.

    It's linux zfs. Didn't try the bsd one.

  • edited March 2020

    I have done some testing of QCOW2 files vs RAW on LVM (apparently you cannot do QCOW2 on LVM with Virtualizor, only file storage) and I don't see much difference. I am using dd, fio, and ioping for testing, and both types of storage have approximately the same numbers for latency, IOPS, throughput, load, etc. with Linux virtual servers (I don't do Windows virtual servers). So I still cannot find a justification for LVM.
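
    For anyone curious, the kind of mixed small-block fio run I mean looks roughly like this (the exact parameters here are illustrative, not a record of the runs above):

        fio --name=randrw --filename=/var/lib/libvirt/images/test.fio --size=4G \
            --rw=randrw --rwmixread=70 --bs=4k --ioengine=libaio --direct=1 \
            --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting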

    I still need to do more testing, but QCOW2 image files in a local ext4 directory are still looking viable to me for a production server.

  • @LTniger said:
    Yea, and if you fill your storage 95%+ you will face a massive performance impact.

    For workloads with millions of small files or high-access databases, 80% max space utilization is the better rule of thumb (ZFS on Linux).

    90% is fine for large files (archives, tarballs, ISOs)

  • perennate Member, Host Rep

    LosPollosHermanos said: Apparently you cannot do QCOW2 on LVM with Virtualizor, only file storage

    You can have a VM configured with LVM partitions inside a qcow2 file; I don't think qcow2 inside LVM really makes sense. LVM supports copy-on-write snapshots and such, which can be used in lieu of the qcow2 features.
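
    For instance, a classic copy-on-write snapshot of a plain per-VM LV (names and the CoW area size are placeholders) covers the usual backup/rollback case:

        lvcreate -s -L 10G -n vm101-snap /dev/vg0/vm101   # CoW snapshot of the VM's LV
        # ...back up from or inspect the snapshot...
        lvremove /dev/vg0/vm101-snap                      # or lvconvert --merge vg0/vm101-snap to roll back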

  • edited March 2020

    Running LVM inside qcow2 is something else entirely from what I am testing. I thought I could run qcow2 images directly on LVM partitions, but it turns out Virtualizor doesn't have that ability, so that's one less thing I need to test.

    I tried loading up the KVM test node with several virtual servers running at the same time.
    Qcow2 in a local ext4 directory still ran like a champ, so it is still looking good for production to me.

    If anyone is under the impression that qcow2 is still inferior to LVM for local storage, I suggest they revisit that assumption. I cannot see even minor differences in latency and speed compared to LVM. That is with writeback cache enabled, which is what I use by default. With no cache, LVM has slightly higher IOPS and throughput than qcow2 with no cache, but not enough to make it more compelling. Latency is higher in both with no cache, but still about the same in both.
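
    For the record, the cache mode is just a per-disk option the panel passes down to QEMU; stripped of everything else it boils down to something like this (the drive line is trimmed and the file name is a placeholder):

        # writeback: writes land in the host page cache first (what I use by default)
        qemu-system-x86_64 ... -drive file=vm101.qcow2,format=qcow2,if=virtio,cache=writeback
        # cache=none uses O_DIRECT and bypasses the host page cache; cache=writethrough is the safest and slowest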

  • AlwaysSkint Member
    edited March 2020

    LosPollosHermanos said: writeback cache enabled

    Too risky! Old fart that I am, I only use write-through on VMs

  • edited March 2020

    @AlwaysSkint said:

    LosPollosHermanos said: writeback cache enabled

    Too risky! Old fart that I am, I only use write-through on VMs

    Not really; it sort of depends on what you are doing. I have been using writeback for probably close to 10 years now and never had a problem. If you are worried about an image becoming unbootable, I think the likelihood of that is practically zero. Servers only write boot info when you update a kernel, at least with Linux. Even if by some chance you had a power loss at that moment, you would still have previous kernels you could boot from.
