Re: LVM performance vs direct dm-thin

On 30. 01. 22 at 1:32, Demi Marie Obenour wrote:
On Sat, Jan 29, 2022 at 10:32:52PM +0100, Zdenek Kabelac wrote:
On 29. 01. 22 at 21:34, Demi Marie Obenour wrote:
How much slower are operations on an LVM2 thin pool compared to manually
managing a dm-thin target via ioctls?  I am mostly concerned about
volume snapshot, creation, and destruction.  Data integrity is very
important, so taking shortcuts that risk data loss is out of the
question.  However, the application may have some additional information
that LVM2 does not have.  For instance, it may know that the volume that
it is snapshotting is not in use, or that a certain volume it is
creating will never be used after power-off.


So brave developers can always write their own management tools for their
constrained environment, and those will be significantly faster in
terms of how many thins you can create per minute (btw you will also need
to consider dropping the use of udev on such a system).

What kind of constraints are you referring to?  Is it possible and safe
to have udev running, but told to ignore the thins in question?
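(For illustration only - one way to make udev leave specific dm devices alone is a rule that sets the flags the stock device-mapper rules honor. The file path, device-name prefix, and exact flag usage below are assumptions, not a tested recipe:)

```
# /etc/udev/rules.d/90-ignore-app-thins.rules  (illustrative name/path)
# Assumes the application names its thin devices with a known prefix,
# so DM_NAME matching can skip the default dm/lvm processing for them.
ENV{DM_NAME}=="appthin-*", ENV{DM_UDEV_DISABLE_OTHER_RULES_FLAG}="1", OPTIONS+="nowatch"
```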

Lvm2 is oriented more towards managing a set of different disks,
where the user is adding/removing/replacing them. So it's more about
recoverability, good support for manual repair (ASCII metadata),
tracking the history of changes, backward compatibility, support
for conversion to different volume types (i.e. caching of thins, pvmove...),
support for no/udev & no/systemd, clusters, and nearly every Linux distro
available... So there is a lot - and all of this adds quite some complexity.

So once you scratch all this - and you say you only care about a single disk - then you are able to use more efficient metadata formats, which you could even keep permanently in memory for the whole lifetime - and this all buys great performance.

But it all depends on how much you can constrain your environment.

It's worth mentioning that lvm2 supports 'external' thin-volume creators - lvm2 only maintains the thin-pool data & metadata LVs, while creation, activation and deactivation of thin volumes is left to an external tool. This was used by Docker for a while - later on they switched to OverlayFS, I believe.
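(For reference, the raw dm-thin operations such an external tool would issue look roughly like the sketch below. The pool device path, thin device ids, and sizes are illustrative assumptions; with DRY_RUN=1, the default here, the commands are only printed, not executed:)

```shell
#!/bin/sh
# Sketch of an external thin-volume creator driving an lvm2-managed pool
# directly via dm-thin messages. Names/ids/sizes are illustrative.
POOL=/dev/mapper/vg-pool-tpool   # thin-pool device activated by lvm2 (assumed)
THIN_ID=0                        # device id chosen by the external tool
SECTORS=2097152                  # volume size: 1 GiB in 512-byte sectors

# With DRY_RUN=1 (default), just print each command instead of running it.
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$@"; else "$@"; fi; }

# 1. allocate a new thin device inside the pool's metadata
run dmsetup message "$POOL" 0 "create_thin $THIN_ID"
# 2. activate it as a block device (thin target table: start len thin pool_dev dev_id)
run dmsetup create thin0 --table "0 $SECTORS thin $POOL $THIN_ID"
# 3. snapshot it: quiesce the origin, then create_snap <new_id> <origin_id>
run dmsetup suspend thin0
run dmsetup message "$POOL" 0 "create_snap 1 $THIN_ID"
run dmsetup resume thin0
# 4. deactivate and delete the thin device when no longer needed
run dmsetup remove thin0
run dmsetup message "$POOL" 0 "delete $THIN_ID"
```

Note this is exactly the bookkeeping (plus crash-safe metadata handling, locking and udev coordination) that lvm2 otherwise does for you.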


It's worth mentioning that the more bullet-proof you want to make your
project, the closer you will get to the extra processing done by lvm2.

Why is this?  How does lvm2 compare to stratis, for example?

Stratis is yet another volume manager, written in Rust and combined with XFS for an easier user experience. That's all I'd probably say about it...

However, before you step into these waters, you should probably
evaluate whether thin-pool actually meets your needs, given your high
expectations for the number of supported volumes - so you do not end up
with hyper-fast snapshot creation while the actual usage does not meet
your needs...

What needs are you thinking of specifically?  Qubes OS needs block
devices, so filesystem-backed storage would require the use of loop
devices unless I use ZFS zvols.  Do you have any specific
recommendations?

As long as you live in a world without crashes, buggy kernels, buggy apps and failing hard drives, everything looks very simple.
And all development costs quite some time & money.

Since you mentioned ZFS - you might want to focus on a 'ZFS-only' solution.
Combining ZFS or Btrfs with lvm2 is always going to be painful, as those filesystems have their own volume management.

Regards

Zdenek

_______________________________________________
linux-lvm mailing list
linux-lvm@xxxxxxxxxx
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/



