Re: Potential enhancements to dm-thin v2

On Mon, Apr 11, 2022 at 10:16:02AM +0200, Zdenek Kabelac wrote:
> On 11. 04. 22 at 0:03, Demi Marie Obenour wrote:
> > For quite a while, I have wanted to write a tool to manage thin volumes
> > that is not based on LVM.  The main thing holding me back is that the
> > current dm-thin interface is extremely error-prone.  The only per-thin
> > metadata stored by the kernel is a 24-bit thin ID, and userspace must
> > take great care to keep that ID in sync with its own metadata.  Failure
> > to do so results in data loss, data corruption, or even security
> > vulnerabilities.  Furthermore, having to suspend a thin volume before
> > one can take a snapshot of it creates a critical section during which
> > userspace must be very careful, as I/O or a crash can lead to deadlock.
> > I believe both of these problems can be solved without overly
> > complicating the kernel implementation.
> 
> 
> Hi
> 
> These things come from the initial design of the whole DM world, where
> complexity is split between the kernel and user space. Projects like
> btrfs and ZFS decided to go the other way and create a monolithic
> 'all-in-one' solution, where they avoid some of the problems related to
> communication between kernel and user space - but at the price of
> kernel code that is pretty complicated and very hard to develop and debug.
> 
> So let me explain one of the reasons we have this logic with suspend;
> it is this basic principle:
> 
> write new lvm metadata -> suspend (with all table preloads) -> commit
> new lvm2 metadata -> resume
> 
> With this we ensure that user space maintains the only valid 'view' of the metadata.
> 
> Your proposal actually breaks this sequence and would move things to a
> state of 'guess which state we are in now' (and IMHO it presents much
> more risk than the virtual problem with suspend from user space - which
> is only a problem if you are using the suspended device as 'swap' or
> 'rootfs' - so there are very easy ways to orchestrate your LVs to avoid
> such problems).

The intent is less “guess which state we are in now” and more “It looks
like dm-thin already has the data structures needed to store some
per-thin metadata, and that could make writing a simple userspace volume
manager FAR FAR easier”.  It appears to me that the only change needed
would be reserving some space (amount fixed at pool creation) after
‘struct disk_device_details’ for use by userspace, and providing a way
for userspace to enumerate the thin devices in a pool and to set and
retrieve that extra data.  Suspend isn’t actually that big of a problem,
since new Qubes OS 4.1 (and later) installs use one pool for the root
filesystem and a separate one for VMs.  As a userspace writer, the
scariest part of managing thin volumes is actually making sure I don’t
lose track of which thin ID corresponds to which volume name.  The
*only* metadata Qubes OS would need would be a per-thin name, size, thin
ID, and possibly UUID.  All of those could be put in that extra space.
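
For concreteness, here is a rough sketch of the kind of record I have in
mind.  The existing fields are my reading of ‘struct disk_device_details’
in drivers/md/dm-thin-metadata.c; the trailing blob, its size, and the
struct names are purely illustrative, not a concrete proposal for the
on-disk format:

/* Existing per-thin record, as I understand it from
 * drivers/md/dm-thin-metadata.c (the kernel uses __le64/__le32; plain
 * fixed-width types are used here so the sketch compiles in userspace). */
#include <stdint.h>

struct disk_device_details_sketch {
        uint64_t mapped_blocks;
        uint64_t transaction_id;        /* when created */
        uint32_t creation_time;
        uint32_t snapshotted_time;
} __attribute__((packed));

/* Hypothetical extension: a fixed amount of opaque space per thin,
 * reserved at pool-creation time, that the kernel stores but never
 * interprets.  Qubes OS would keep the volume name, size, thin ID, and
 * possibly a UUID here; other volume managers could store whatever
 * they like. */
#define USERSPACE_BLOB_SIZE 256         /* illustrative; fixed at pool creation */

struct disk_device_details_v2_sketch {
        struct disk_device_details_sketch base;
        uint8_t userspace_data[USERSPACE_BLOB_SIZE];
} __attribute__((packed));

Userspace would also need a way to enumerate thin IDs and to get and set
that blob (for example via new pool messages alongside the existing
create_thin/create_snap/delete ones), but the exact interface matters far
less to me than having the space at all.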

> Basically you are essentially wanting to move the whole management into
> the kernel for some not-so-great speed gains relative to the rest of
> the running system (and you can certainly do that by writing your own
> kernel module to manage your rather unique software problem).

From a storage perspective, my problem is basically the same one that
Docker’s devicemapper driver solves.  Unlike Docker, though, Qubes OS
must work at the block level; it can’t work at the filesystem level, so
overlayfs and friends aren’t options.

> But IMHO the creation and removal of thousands of devices in a very
> short period of time rather suggests there is something sub-optimal in
> your original software design, as I'm really having a hard time
> imagining why you would need this?

There very well could be (suggestions for improvement welcome).

> If you wish to operate lots of devices, simply keep them created and
> ready, and eventually blkdiscard them for reuse as the next device.

That would work for volatile volumes, but those are only about 1/3 of
the volumes in a Qubes OS system.  The other 2/3 are writable snapshots.
Also, Qubes OS has found blkdiscard on thins to be a performance
problem.  It used to lock up entire pools until Qubes OS moved to doing
the blkdiscard in chunks.
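
For reference, the chunking amounts to something like the sketch below at
the ioctl level (the 1 GiB step is an illustrative value, not the one
Qubes OS actually uses):

/* Minimal sketch: discard a block device in fixed-size chunks instead
 * of issuing one huge BLKDISCARD over the whole device. */
#include <fcntl.h>
#include <linux/fs.h>           /* BLKDISCARD, BLKGETSIZE64 */
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

static int discard_in_chunks(const char *path, uint64_t step)
{
        int fd = open(path, O_WRONLY);
        if (fd < 0) {
                perror("open");
                return -1;
        }

        uint64_t size;
        if (ioctl(fd, BLKGETSIZE64, &size) < 0) {
                perror("BLKGETSIZE64");
                close(fd);
                return -1;
        }

        for (uint64_t off = 0; off < size; off += step) {
                uint64_t range[2] = { off, step < size - off ? step : size - off };

                if (ioctl(fd, BLKDISCARD, &range) < 0) {
                        perror("BLKDISCARD");
                        close(fd);
                        return -1;
                }
        }
        return close(fd);
}

int main(int argc, char **argv)
{
        if (argc != 2) {
                fprintf(stderr, "usage: %s <thin device>\n", argv[0]);
                return 1;
        }
        return discard_in_chunks(argv[1], 1ULL << 30) ? 1 : 0;
}

Each chunk is a separate ioctl, so other I/O gets a chance to make
progress between chunks instead of stalling for the duration of one
giant discard.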

> I'm also unsure where any special need to instantiate that many
> snapshots would arise from - if there is some valid & logical purpose,
> lvm2 could perhaps gain an extended user-space API to create multiple
> snapshots at once (i.e. create 10 snapshots with name-%d of a single
> thinLV).

This would be amazing, and Qubes OS should be able to use it.  That
said, Qubes OS would prefer to be able to choose the name of each volume
separately.  Could there be a more general batching operation?  Just
supporting ‘lvm lvcreate’ and ‘lvm lvs’ would be great, but support for
‘lvm lvremove’, ‘lvm lvrename’, ‘lvm lvextend’, and ‘lvm lvchange
--activate=y’ as well would be even better.
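
For concreteness, what Qubes OS has to do today amounts to a loop like
the sketch below (the VG and LV names are made up), where every
iteration pays the full metadata-commit and udev-sync cost that a
batched call could amortize:

/* Per-snapshot lvcreate invocations: one full lvm2 command, metadata
 * commit, and udev sync per volume.  VG and LV names are hypothetical. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
        static const char *const origins[] = {
                "vm-work-root", "vm-personal-root", "vm-web-root",
        };
        char cmd[256];

        for (size_t i = 0; i < sizeof(origins) / sizeof(origins[0]); i++) {
                /* lvcreate -s on a thin LV creates a thin snapshot. */
                snprintf(cmd, sizeof(cmd),
                         "lvm lvcreate -s -n %s-snap qubes_dom0/%s",
                         origins[i], origins[i]);
                if (system(cmd) != 0) {
                        fprintf(stderr, "snapshotting %s failed\n", origins[i]);
                        return 1;
                }
        }
        return 0;
}

A batched equivalent could commit the lvm2 metadata once and settle udev
once for the whole set, which is where I expect the win to be.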

> Not to mention that operating that many thin volumes from a single
> thin-pool is also nothing close to the high-performance goal you are
> trying to reach...

Would you mind explaining?  My understanding, and the basis of
essentially all my feature requests in this area, was that virtually all
of the cost of LVM is the userspace metadata operations, udev syncing,
and device scanning.  I have been assuming that the kernel does not have
performance problems with large numbers of thin volumes.

Right now, my machine has 334 active thin volumes, split between one
pool on an NVMe drive and one on a spinning hard drive.  The pool on an
NVMe drive has 312 active thin volumes, of which I believe 64 are in use.
Are these numbers high enough to cause significant performance
penalties for dm-thin v1, and would they cause problems for dm-thin v2?
How much of a performance win can I expect from only activating the
subset of volumes I actually use?

Also, I believe a significant fraction of I/O is writes to previously
unallocated blocks.  I haven’t measured how much, though, since I am not
aware of any way to get that statistic, at least without kprobes or
similar.
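
The closest thing I can think of is a crude proxy: sample a thin
device’s mapped-sector count (the thin target’s status line is
‘<start> <len> thin <mapped sectors> <highest mapped sector>’) next to
the sectors-written field of its /sys/block/dm-N/stat entry and compare
the deltas.  A sketch, with the device names and the sampling interval
made up, and ignoring the fact that discards also move the mapped count:

/* Crude proxy for "how much I/O goes to previously unallocated blocks":
 * sample mapped sectors and sectors written, wait, sample again, and
 * compare the deltas.  Assumes the thin target status format
 * "<start> <len> thin <mapped> <highest>" and the standard
 * /sys/block/<dm-N>/stat layout (field 7 = sectors written). */
#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static uint64_t mapped_sectors(const char *dm_name)
{
        char cmd[256], target[32];
        uint64_t start, len, mapped;

        snprintf(cmd, sizeof(cmd), "dmsetup status %s", dm_name);
        FILE *p = popen(cmd, "r");
        if (!p || fscanf(p, "%" SCNu64 " %" SCNu64 " %31s %" SCNu64,
                         &start, &len, target, &mapped) != 4)
                exit(1);
        pclose(p);
        return mapped;
}

static uint64_t sectors_written(const char *dm_node)    /* e.g. "dm-5" */
{
        char path[256];
        uint64_t v[11] = { 0 };

        snprintf(path, sizeof(path), "/sys/block/%s/stat", dm_node);
        FILE *f = fopen(path, "r");
        if (!f)
                exit(1);
        for (int i = 0; i < 11; i++)
                if (fscanf(f, "%" SCNu64, &v[i]) != 1)
                        exit(1);
        fclose(f);
        return v[6];    /* field 7: sectors written */
}

int main(int argc, char **argv)
{
        if (argc != 3) {
                fprintf(stderr, "usage: %s <dm name> <dm-N>\n", argv[0]);
                return 1;
        }
        uint64_t m0 = mapped_sectors(argv[1]), w0 = sectors_written(argv[2]);
        sleep(60);      /* arbitrary sampling interval */
        uint64_t m1 = mapped_sectors(argv[1]), w1 = sectors_written(argv[2]);

        printf("newly provisioned: %" PRIu64 " of %" PRIu64 " sectors written\n",
               m1 - m0, w1 - w0);
        return 0;
}

It is only an approximation, but it would at least bound the fraction.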

The pool on a spinning hard drive has 22 thin volumes, of which I
believe only one is in use.  The HDD is mostly used for backups, so its
performance doesn’t matter that much.

-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab

