On Mon, Feb 11, 2019 at 1:14 PM Nicolas Mailhot <nicolas.mailhot@xxxxxxxxxxx> wrote:
>
> That was during dnf update to the result of the latest rawhide mass
> rebuild, on two UEFI systems, one initially installed in September 2015,
> the other in January 2019, with whatever was most current then and then
> switched to rawhide and continually updated via dnf since.

Weird, this feature landed in Rawhide a long time ago, September or
October last year? Why would the mass rebuild trigger the problem
you're seeing?

>
> Both systems use lvm, one with md raid below lvm, the other without.
> Both have the separate /boot/efi vfat mount, one with a separate ext4
> /boot below it.
>
> > The symlink business is confusing. I think that's for grubby's
> > benefit. It is a self-describing method rather than hardwiring it in.
> > But I don't really like that the real grubenv ends up being in
> > /boot/efi/EFI/fedora/grubenv even on BIOS systems, where that path
> > shouldn't even exist, but... ya, not a hill I want to die on today,
> > I think.
>
> There's no "real" grubenv; when the symlink gets broken you see that
> part of the tools write to one file location, and the others to the
> other one. IIRC boot_success is a pathological case: the thing that
> sets it to 0 writes to one grubenv location, and the thing that sets
> it to 1 uses the other one.

The thing that sets it to 0 is the pre-boot GRUB code, the
bootloader/manager itself; it happens as it reads the grub.cfg. In your
UEFI cases, this should only ever be the grubenv in the same location
as the GRUB EFI binary.

I don't know offhand whether pre-boot GRUB follows symlinks, and
therefore whether BIOS GRUB knows to write to the grubenv in
/boot/efi/EFI/fedora/grubenv - but I don't see how it could overwrite a
symlink anyway; it should just fail. I'm not sure what such a failure
looks like, though. There are caveats with the bootloader writing to
grubenv on md raid, LVM, XFS, and Btrfs; in your case that doesn't
sound applicable, except possibly on the raid1 system: is the EFI
system partition on raid1, or is it a plain partition, as is the usual
case with a single-drive installation?

The thing that sets it to 1 is a systemd unit on a 2 minute timer (I
think starting from when gdm launches). This unit goes through the file
system, so it's always a legitimate write through all the various
storage stack layers, and it definitely follows the symlink.

--
Chris Murphy
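
For anyone wanting to check which of the two locations their tools are
actually touching, here is a minimal inspection sketch, assuming the
stock Fedora UEFI layout discussed above (/boot/grub2/grubenv as a
symlink into the EFI system partition):

  # Is /boot/grub2/grubenv still a symlink, and where does it point?
  ls -l /boot/grub2/grubenv /boot/efi/EFI/fedora/grubenv

  # Compare what each location records; if the symlink has been broken
  # or replaced by a regular file, the two boot_success values can
  # disagree, which is the split described above.
  grep -H boot_success /boot/grub2/grubenv /boot/efi/EFI/fedora/grubenv

If ls shows a single regular file behind the symlink, pre-boot GRUB and
the userspace tools are at least aiming at the same environment block.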
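
And a sketch of the userspace half, the write that flips boot_success
back to 1. These are generic grub2-editenv invocations (the write needs
root); whether the shipped systemd unit calls grub2-editenv or some
other helper is an assumption here, not a description of the actual
unit:

  # List the current environment block via the default (symlinked) path.
  grub2-editenv - list

  # The kind of write the post-boot tooling performs; it goes through
  # the mounted filesystem, so it follows the symlink.
  sudo grub2-editenv - set boot_success=1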