Re: LVM performance vs direct dm-thin

On 02. 02. 22 at 3:09, Demi Marie Obenour wrote:
On Sun, Jan 30, 2022 at 06:43:13PM +0100, Zdenek Kabelac wrote:
On 30. 01. 22 at 17:45, Demi Marie Obenour wrote:
On Sun, Jan 30, 2022 at 11:52:52AM +0100, Zdenek Kabelac wrote:
On 30. 01. 22 at 1:32, Demi Marie Obenour wrote:
On Sat, Jan 29, 2022 at 10:32:52PM +0100, Zdenek Kabelac wrote:
On 29. 01. 22 at 21:34, Demi Marie Obenour wrote:
My biased advice would be to stay with lvm2. There is a lot of work, many
things are not well documented, and getting everything running correctly will
take a lot of effort (Docker in fact did not manage to do it well and was
incapable of providing any recoverability).

What did Docker do wrong?  Would it be possible for a future version of
lvm2 to be able to automatically recover from off-by-one thin pool
transaction IDs?

Ensuring all steps in the state machine are always correct is not exactly simple.
But since I've not heard about the off-by-one problem for a long while, I believe we've managed to close all the holes and bugs in the double-commit system
and in metadata handling by thin-pool and lvm2... (for recent lvm2 & kernel)
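
Just to illustrate what the off-by-one mismatch looks like in practice, here is an untested Python sketch that compares the transaction_id stored in lvm2 metadata (`lvs -o transaction_id`) with the one reported by the kernel thin-pool target (`dmsetup status`). The VG/pool names and the "-tpool" device name are placeholders - adjust them to whatever `dmsetup ls` shows on your system.

#!/usr/bin/env python3
# Sketch only: compare lvm2's recorded thin-pool transaction_id with the
# kernel's. A mismatch is the condition the double-commit logic is meant
# to prevent. Names below are hypothetical examples.
import subprocess

VG, POOL = "vg", "pool"                  # hypothetical VG / thin-pool names
POOL_DM_NAME = f"{VG}-{POOL}-tpool"      # lvm2 usually names the active pool device this way

def lvm_transaction_id():
    # lvm2 side: the value recorded in the VG metadata
    out = subprocess.run(
        ["lvs", "--noheadings", "-o", "transaction_id", f"{VG}/{POOL}"],
        capture_output=True, text=True, check=True)
    return int(out.stdout.strip())

def kernel_transaction_id():
    # kernel side: the field right after "thin-pool" in `dmsetup status`
    out = subprocess.run(
        ["dmsetup", "status", POOL_DM_NAME],
        capture_output=True, text=True, check=True)
    fields = out.stdout.split()
    return int(fields[fields.index("thin-pool") + 1])

if __name__ == "__main__":
    lvm_id, kernel_id = lvm_transaction_id(), kernel_transaction_id()
    print(f"lvm2 metadata transaction_id:    {lvm_id}")
    print(f"kernel thin-pool transaction_id: {kernel_id}")
    if lvm_id != kernel_id:
        print("MISMATCH - manual metadata repair may be needed")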

It's difficult - if you were distributing lvm2 with an exact kernel version
& udev & systemd in a single Linux distro, it would remove a huge set of
troubles...

Qubes OS comes close to this in practice.  systemd and udev versions are
known and fixed, and Qubes OS ships its own kernels.

Systemd/udev evolves - so fixed today doesn't really mean the same version will be there tomorrow. And unfortunately systemd is known to introduce backward-incompatible changes from time to time...

I'm not familiar with QubesOS - but in many cases in the real world we
can't push the latest & greatest to our users - so we need to live with bugs and
add workarounds...

Qubes OS is more than capable of shipping fixes for kernel bugs.  Is
that what you are referring to?

I'm not going to start discussing this topic ;)

A chain of filesystem->block_layer->filesystem->block_layer is something you most
likely do not want to use for any well-performing solution...
But it's OK for testing...
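
For concreteness, here is a rough sketch (untested, root required, all paths made up) of the kind of chain meant above: a backing file on one filesystem, exposed as a block device through the loop driver, with another filesystem on top. Fine for experiments, not for production.

#!/usr/bin/env python3
# Sketch: build the filesystem -> block_layer -> filesystem -> block_layer
# chain for testing. Uses standard tools only: truncate, losetup, mkfs.ext4,
# mount. BACKING_FILE and MOUNTPOINT are hypothetical paths.
import os
import subprocess

BACKING_FILE = "/srv/images/test-backing.img"   # lives on the *outer* filesystem
MOUNTPOINT = "/mnt/loop-test"                   # where the *inner* filesystem is mounted

def run(*cmd):
    print("+", " ".join(cmd))
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout.strip()

# 1. sparse backing file on the outer filesystem
run("truncate", "-s", "10G", BACKING_FILE)

# 2. expose the file as a block device via the loop driver
loop_dev = run("losetup", "--find", "--show", BACKING_FILE)

# 3. put another filesystem on top of the loop device and mount it
run("mkfs.ext4", "-q", loop_dev)
os.makedirs(MOUNTPOINT, exist_ok=True)
run("mount", loop_dev, MOUNTPOINT)

print(f"chain ready: {MOUNTPOINT} -> {loop_dev} -> {BACKING_FILE} -> outer filesystem")
# Every write now passes through two filesystems and two block layers.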

How much of this is due to the slow loop driver?  How much of it could
be mitigated if btrfs supported an equivalent of zvols?

Here you are missing the core of the problem from the kernel POV, i.e.
how memory allocation works and what approximations the kernel makes with buffer handling and so on. So whoever is using 'loop' devices in production systems in the way described above has never really tested any corner-case logic...
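
One concrete part of the buffering problem is double caching: by default the loop driver does buffered I/O against its backing file, so the page cache holds data both for the filesystem on top of the loop device and for the backing file underneath. A small sketch (untested, /dev/loop0 is a placeholder, assumes a util-linux losetup that supports --direct-io) of how one might at least remove that duplication while experimenting:

#!/usr/bin/env python3
# Sketch: switch an existing loop device to direct I/O against its backing
# file so data is not cached twice. This does not make the chain fast - it
# only removes one layer of duplicated buffering.
import subprocess

LOOP_DEV = "/dev/loop0"  # hypothetical; use the device returned by `losetup --find --show`

subprocess.run(["losetup", "--direct-io=on", LOOP_DEV], check=True)

# Verify: the DIO column should now show 1 for this device.
out = subprocess.run(
    ["losetup", "--list", "--output", "NAME,DIO,BACK-FILE", LOOP_DEV],
    capture_output=True, text=True, check=True)
print(out.stdout)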

Regards

Zdenek

_______________________________________________
linux-lvm mailing list
linux-lvm@xxxxxxxxxx
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
