On Tue, Oct 4, 2016 at 10:09 AM, Stephen Gallagher <sgallagh@xxxxxxxxxx> wrote:
> On 10/04/2016 12:06 PM, Andrew Lutomirski wrote:
>>
>> On Oct 4, 2016 8:52 AM, "Adam Williamson" <adamwill@xxxxxxxxxxxxxxxxx> wrote:
>>>
>>> Recently several reports of people getting 'duplicated packages' and
>>> 'kernel updates not working' have come through to us in QA from Fedora
>>> 24 users. I managed to get one reporter to explain more specifically
>>> what happened, and it sounds a lot like what's happening is that
>>> something in the 'dnf update' process can cause a GNOME or X crash,
>>> possibly depending on hardware or package set installed. When that
>>> happens, the update process is killed and does not complete cleanly,
>>> which is why you get 'duplicated packages' and other odd results.
>>
>> How hard would it be to make dnf do the rpm transaction inside a proper
>> system-level service (transient or otherwise)? This would greatly increase
>> robustness against desktop crashes, ssh connection loss, KillUserProcs, and
>> other damaging goofs.
>
> That seems like a waste of effort, considering we have the offline updates
> process which just boots into a special, minimalist environment with almost
> nothing but the updater running.

It's not really workable without an atomic, out-of-tree update method; otherwise libraries are still yanked out from under running processes at some point.

I've done this with nspawn (and chroot): take a snapshot of root, apply the update to the snapshot, then change the bootloader to boot the updated snapshot. It's tedious, but it's reliable in that pretty much anything bad can happen and only the fs tree being updated can break. And only one reboot is needed.

The long-term solution is an rpm-ostree-based Workstation, where the currently running fs tree isn't touched either.
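For concreteness, the snapshot-then-switch flow above can be sketched roughly as below. This is a hypothetical sketch assuming a btrfs root, not Chris's actual scripts: the subvolume names (/sysroot/root, /sysroot/root-updated), the boot entry path, and the dnf invocation are all illustrative. The sketch only prints the plan rather than executing it, since the real steps require root and a btrfs root filesystem.

```shell
#!/bin/sh
# Sketch of: snapshot root -> apply update inside the snapshot via
# systemd-nspawn -> point the bootloader at the updated snapshot ->
# reboot once. All paths and names are hypothetical examples.

plan_update() {
    # Print the planned commands instead of running them.
    cat <<'EOF'
btrfs subvolume snapshot /sysroot/root /sysroot/root-updated
systemd-nspawn -D /sysroot/root-updated dnf -y update
sed -i 's/subvol=root/subvol=root-updated/' /boot/loader/entries/fedora.conf
systemctl reboot
EOF
}

plan_update
```

Because the transaction runs against the snapshot rather than the live tree, a crash at any point during the update leaves the running system untouched; at worst the root-updated snapshot is discarded and recreated.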
--
Chris Murphy
_______________________________________________
devel mailing list -- devel@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to devel-leave@xxxxxxxxxxxxxxxxxxxxxxx