(cc: desktop@ list too)

Not sure. Lennart is out until ~Nov 1 on paternity leave, and then his first priority is merging all the systemd-homed stuff plus a bunch of fixups. He says it will be production ready by then.

One thing I got from Lennart that I don't have a complete assessment on is to what degree he's replicated the functionality of fscrypt (the userspace tool from Google). He's hooking into fscrypt (the kernel code) directly, and doesn't have any of the fancier key management that Google's fscrypt has. For example, Google's fscrypt lets you change the user master passphrase, which is a wrapped key separate from the secret key (let's call it a DEK, even though there is one DEK and then many derived DEKs, one per file), so it's not necessary to reencrypt everything. systemd-homed lacks this feature, so it requires 50% free space to reencrypt everything whenever the user passphrase changes.

I already told Lennart I personally think that's close to a deal killer, if I'm really understanding this limitation correctly. It's weird behavior, and such a burden on resources, space, and time - and it would likely need to be done offline (the user is not logged in and can't be using their home while the reencryption happens) - that it would inhibit users from changing their passphrase when they probably should. That makes it anti-security, i.e. "I should change my login passphrase after that work trip; oh crap, I have to reencrypt 250G?! No way. Guess I won't change the passphrase." I think it's unworkable, and I said so. Casually, some people might use it. But I can't recommend that.

Someone who can read the code needs to look at it and give an independent assessment: is it really production worthy, is the use case too narrow or just right, and if too narrow, what kind of work would be needed to broaden it? Etc. I can't really do any of those things. But for example, to what degree will homed support FreeIPA/AD domains?
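To make the key-wrapping point concrete, here's a toy sketch (not fscrypt or systemd-homed code; the XOR "wrap" and all names are purely illustrative) of why a wrapped DEK makes passphrase changes cheap: file data is encrypted under a random DEK, and a passphrase change only rewraps that one small key, never the data.

```python
# Toy illustration of key wrapping (NOT real fscrypt behavior or a
# secure wrapping scheme): a random data-encryption key (DEK) is
# wrapped by a key derived from the passphrase, so changing the
# passphrase rewraps 32 bytes instead of reencrypting the whole home.
import hashlib
import os
import secrets

def derive_kek(passphrase: bytes, salt: bytes) -> bytes:
    # Derive a 32-byte key-encryption key (KEK) from the passphrase.
    return hashlib.scrypt(passphrase, salt=salt, n=2**14, r=8, p=1, dklen=32)

def xor32(a: bytes, b: bytes) -> bytes:
    # Stand-in "wrap" operation for illustration only.
    return bytes(x ^ y for x, y in zip(a, b))

salt = os.urandom(16)
dek = secrets.token_bytes(32)                 # all file data keys derive from this
wrapped = xor32(dek, derive_kek(b"old passphrase", salt))

# Passphrase change: unwrap with the old KEK, rewrap with the new one.
# No file data is touched at any point.
unwrapped = xor32(wrapped, derive_kek(b"old passphrase", salt))
wrapped = xor32(unwrapped, derive_kek(b"new passphrase", salt))

# The DEK recovered via the new passphrase is the same key the files
# were encrypted under, so nothing needs reencrypting.
assert xor32(wrapped, derive_kek(b"new passphrase", salt)) == dek
```

Without this indirection, the passphrase effectively is the data key, and changing it means rewriting every encrypted byte - which is where the 50% free space requirement comes from.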
Is that a small problem or a big problem? Is there some other way to handle those cases, and how?

I'm also concerned about the performance implications of e.g. GNOME Boxes in the case where it creates a qcow2 file on top of a user home which is on a file system backed by a loopback-mounted encrypted file, which is on another file system. That VM is basically pushing writes through three file systems. I've done quite a lot of brutal testing over the years where I do bat guano crazy things like Btrfs on qcow2, on a Btrfs home on an encrypted loop file, on Btrfs, and I'll use the VM and crash the host and everything survives - that is, the file systems don't break. There is data loss, but that's sorta expected for any file system. A consequence of that arrangement, though, is that Btrfs is shit when it comes to fsync. So if the GNOME Boxes default cache mode of writeback continues to be used, which is reasonable, Btrfs performance gets bad with fsync-heavy workloads. Otherwise it performs the same as, and occasionally better than, other file systems.

So yeah - who wants to help dig into the dirty work and actually make a recommendation? We probably don't have enough R&D done yet to decide on this, but I'm gonna guess it makes a lot more sense to try to use systemd-homed and convince Lennart that, as early adopters, we want it to be successful - and in order to be successful, he's gonna have to let it grow into something that'll actually do what we want and need. But for that to be convincing, we have to know what we want and need.

Chris

On Tue, Oct 22, 2019 at 1:39 PM Michael Catanzaro <mcatanzaro@xxxxxxxxx> wrote:
>
>
> What should we do to unblock this? Would another meeting be helpful, or
> do we need to do further investigation/research first, or...?
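For anyone digging into the fsync concern above, a minimal micro-benchmark sketch (the path and iteration count are arbitrary, not from any existing test suite) is enough to compare a plain filesystem against the stacked qcow2-on-home-on-loopback layouts: run it on each layout and compare the timings.

```python
# Minimal fsync-heavy micro-benchmark sketch: time N small writes, each
# followed by fsync(), so every write is forced through every storage
# layer underneath. Run it on different mount points to compare layouts.
import os
import tempfile
import time

def fsync_bench(path: str, iterations: int = 100, size: int = 4096) -> float:
    """Return seconds taken for `iterations` write+fsync cycles at `path`."""
    buf = os.urandom(size)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    start = time.monotonic()
    try:
        for _ in range(iterations):
            os.write(fd, buf)
            os.fsync(fd)  # force the write through every layer below
    finally:
        os.close(fd)
    return time.monotonic() - start

# Example run against a temporary file (substitute a path on the
# filesystem stack you actually want to measure).
with tempfile.NamedTemporaryFile(delete=False) as tf:
    target = tf.name
elapsed = fsync_bench(target)
os.unlink(target)
print(f"{elapsed:.3f}s for 100 write+fsync cycles")
```

Tools like fio can do this far more thoroughly, but even this crude loop should make the fsync penalty of the three-filesystem stack visible.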
>
> -- Chris Murphy
_______________________________________________
desktop mailing list -- desktop@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to desktop-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/desktop@xxxxxxxxxxxxxxxxxxxxxxx