Re: upgrading from f28 to f29 messed up my system...

On 12/5/18 4:03 AM, François Patte wrote:
> 1- déc. 05 11:33:05 dipankar systemd[1]: systemd-modules-load.service:
> Main process exited, code=exited, status=1/FAILURE
> déc. 05 11:33:05 dipankar systemd[1]: systemd-modules-load.service:
> Failed with result 'exit-code'.
> déc. 05 11:33:05 dipankar systemd[1]: Failed to start Load Kernel Modules.
> -- Subject: L'unité (unit) systemd-modules-load.service a échoué
> [the unit systemd-modules-load.service failed]
>
> This error occurs several times during the boot process.

You can ignore this.  I think I get these messages on all systems.

> 2- A lot of errors about lvm also occur:
>
> déc. 05 11:33:06 dipankar lvm[1178]:   3 logical volume(s) in volume
> group "systeme" now active
> déc. 05 11:33:06 dipankar lvm[1178]:   WARNING: Device mismatch detected
> for debian/deb-racine which is accessing /dev/md127 instead of /dev/sda2.
> déc. 05 11:33:06 dipankar lvm[1178]:   device-mapper: reload ioctl on
> (253:0) failed: Périphérique ou ressource occupé [Device or resource busy]
> déc. 05 11:33:06 dipankar lvm[1178]:   Failed to suspend debian/deb-racine.
> déc. 05 11:33:06 dipankar kernel: device-mapper: table: 253:0: linear:
> Device lookup failed
> déc. 05 11:33:06 dipankar kernel: device-mapper: ioctl: error adding
> target to table
This is where the problem is. Someone ran into a similar problem at work, and I think it has to do with the RAID metadata type. LVM is detecting the volumes on the underlying partition before the RAID array is assembled, so it tries to use the partition directly. When the array does come up, LVM then finds duplicate volumes on the other RAID member partition and refuses to proceed.

Since all your LVM volumes live only on RAID devices, you could try modifying the /etc/lvm/lvm.conf file. Find the global_filter option and add the following line:
global_filter = [ "r|sd|" ]
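
For context, a minimal sketch of what the edited stanza in /etc/lvm/lvm.conf might look like (global_filter lives in the devices section; the regex "r|sd|" rejects any device whose path contains "sd", i.e. the raw disks and their partitions, while everything else, including the /dev/md* arrays, is still scanned):

```
devices {
    # Reject raw sd* disks/partitions so LVM only sees the
    # assembled /dev/md* RAID devices, avoiding the duplicate
    # PV detection on /dev/sda2 vs /dev/md127.
    global_filter = [ "r|sd|" ]
}
```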

Does it work if you boot one of the F28 kernels from the boot menu? If not, you will need an F28 (or earlier) live boot to modify the file. I expect you will also need to regenerate the initramfs. I would suggest using the network install image in rescue mode, as that makes it easier to chroot into the installed system and work on it. Be aware that if you do this, it will automatically trigger an SELinux relabel of the entire filesystem on the next boot. You can avoid that by deleting the /.autorelabel file, but only do so if you are sure you did not modify any file labels.
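
The steps above might look roughly like this from the rescue environment. This is a sketch, not a tested procedure: it assumes anaconda's rescue mode has mounted the installed system at /mnt/sysimage (its usual location), and you must substitute your actual installed F29 kernel version where indicated:

```
# Assumption: rescue mode mounted the installed system here.
chroot /mnt/sysimage

# Add the global_filter line to the devices section:
vi /etc/lvm/lvm.conf

# Rebuild the initramfs so the new filter takes effect in early boot.
# Do NOT use $(uname -r) inside the chroot -- it reports the rescue
# kernel. List /boot and use the installed kernel's version string:
ls /boot/vmlinuz-*
dracut --force /boot/initramfs-<version>.img <version>

# Optional: skip the automatic SELinux relabel on next boot
# (only safe if you are sure no file labels were changed):
rm -f /.autorelabel

exit
```

The dracut step matters because LVM runs from the initramfs during early boot, so a filter that exists only in the on-disk /etc/lvm/lvm.conf would not be seen until after the damage is done.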
_______________________________________________
users mailing list -- users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to users-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://getfedora.org/code-of-conduct.html
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/users@xxxxxxxxxxxxxxxxxxxxxxx


