On Sun, Jan 30, 2022 at 12:18:32PM +0100, Zdenek Kabelac wrote:
> Dne 30. 01. 22 v 2:20 Demi Marie Obenour napsal(a):
> > On Sat, Jan 29, 2022 at 10:40:34PM +0100, Zdenek Kabelac wrote:
> > > Dne 29. 01. 22 v 21:09 Demi Marie Obenour napsal(a):
> > > > On Sat, Jan 29, 2022 at 08:42:21PM +0100, Zdenek Kabelac wrote:
> > > > > Dne 29. 01. 22 v 19:52 Demi Marie Obenour napsal(a):
> > > > > > Is it possible to configure LVM2 so that it runs thin_trim
> > > > > > before it activates a thin pool? Qubes OS currently runs
> > > > > > blkdiscard on every thin volume before deleting it, which is
> > > > > > slow and unreliable. Would running thin_trim during system
> > > > > > startup provide a better alternative?
> > > > >
> > > > > Hi
> > > > >
> > > > > Nope there is currently no support from lvm2 side for this.
> > > > > Feel free to open RFE.
> > > >
> > > > Done: https://bugzilla.redhat.com/show_bug.cgi?id=2048160
> > >
> > > Thanks
> > >
> > > Although your use-case Thinpool on top of VDO is not really a good
> > > plan and there is a good reason behind why lvm2 does not support
> > > this device stack directly (aka thin-pool data LV as VDO LV).
> > > I'd say you are stepping on very very thin ice...
> >
> > Thin pool on VDO is not my actual use-case. The actual reason for the
> > ticket is slow discards of thin devices that are about to be deleted;
>
> Hi
>
> Discard of thins itself is AFAIC pretty fast - unless you have massively
> sized thin devices with many GiB of metadata - obviously you cannot
> process this amount of metadata in nanoseconds (and there are prepared
> kernel patches to make it even faster)

Would you be willing and able to share those patches?

> What is the problem is the speed of discard of physical devices.
> You could actually try to feel difference with:
> lvchange --discards passdown|nopassdown thinpool

In Qubes OS I believe we do need the discards to be passed down
eventually, but I doubt it needs to be synchronous. Being able to run
the equivalent of `fstrim -av` periodically would be amazing. I’m
CC’ing Marek Marczykowski-Górecki (Qubes OS project lead) in case he
has something to say.
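For concreteness, what I have in mind is roughly the following -- an
untested sketch with made-up VG/pool names, which hand-waves over how
the pool's component devices get exposed while the pool itself stays
inactive (that orchestration is exactly what the RFE asks lvm2 to
handle):

    # Discard all unprovisioned space of an *inactive* thin pool.
    # thin_trim ships with thin-provisioning-tools; the device paths
    # below are illustrative only.
    thin_trim --metadata-dev /dev/mapper/vg-pool_tmeta \
              --data-dev /dev/mapper/vg-pool_tdata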
> Also it's very important to keep metadata on fast storage device
> (SSD/NVMe)! Keeping metadata on same hdd spindle as data is always
> going to feel slow (in fact it's quite pointless to talk about
> performance and use hdd...)

That explains why I had such a horrible experience with my initial
(split between NVMe and HDD) install. I would not be surprised if some
or all of the metadata volume wound up on the spinning disk.
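If anyone else is in the same boat, this is roughly how I plan to check
for (and, if necessary, fix) that -- untested, and the VG, LV, and PV
names below are only placeholders:

    # Show which physical devices each (hidden) pool component sits on.
    lvs -a -o lv_name,devices qubes_dom0
    # Move only the metadata LV's extents from the HDD to the NVMe PV.
    pvmove -n pool00_tmeta /dev/sdb2 /dev/nvme0n1p3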
> > you can find more details in the linked GitHub issue. That said, now I
> > am curious why you state that dm-thin on top of dm-vdo (that is,
> > userspace/filesystem/VM/etc ⇒ dm-thin data (*not* metadata) ⇒ dm-vdo ⇒
> > hardware/dm-crypt/etc) is a bad idea. It seems to be a decent way to
>
> Out-of-space recoveries are ATM much harder then what we want.

Okay, thanks! Will this be fixed in a future version?

> So as long as user can maintain free space of your VDO and thin-pool
> it's ok. Once user runs out of space - recovery is pretty hard task
> (and there is reason we have support...)

Out of space is already a tricky issue in Qubes OS. I certainly would
not want to make it worse.

> > add support for efficient snapshots of data stored on a VDO volume, and
> > to have multiple volumes on top of a single VDO volume. Furthermore,
>
> We hope we will add some direct 'snapshot' support to VDO so users will
> not need to combine both technologies together.

Does that include support for splitting a VDO volume into multiple,
individually-snapshottable volumes, the way thin works?

> Thin is more oriented towards extreme speed.
> VDO is more about 'compression & deduplication' - so space efficiency.
>
> Combining both together is kind of harming their advantages.

That makes sense.

--
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab