Hi. Please check my log excerpts and comments below.

On Wed, Feb 17, 2021 at 01:03:29PM +0100, Christian Hesse wrote:
> > But in general, I think this needs deeper analysis. Looking at
> > https://bugs.archlinux.org/task/69611, the workaround appears to have
> > been found simply by drawing an analogy to a previous similar case.
> > I'd like to understand what happened on the arch system when the error
> > occured, and why this simple ordering directive avoided it.
>
> As said I can not reproduce it myself... Oleksandr, can you give more details?
> Possibly everything from journal regarding systemd-udevd.service (and
> systemd-udevd.socket) and lvm2-pvscan@*.service could help.

I'm not sure this issue is reproducible with just any kind of LVM layout. What I have is thin-LVM-on-LUKS-on-LVM:

```
[~]> lsblk
NAME                                   MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
nvme0n1                                259:0    0 238,5G  0 disk
├─nvme0n1p1                            259:1    0     8M  0 part  /boot/EFI
└─nvme0n1p2                            259:2    0 238,5G  0 part
  ├─base-boot                          254:0    0   256M  0 lvm   /boot
  └─base-sys                           254:1    0 238,2G  0 lvm
    └─sys                              254:2    0 238,2G  0 crypt
      ├─sys-swap                       254:3    0     2G  0 lvm   [SWAP]
      ├─sys-pool_tmeta                 254:4    0   120M  0 lvm
      │ └─sys-pool-tpool               254:6    0   236G  0 lvm
      │   ├─sys-pool                   254:7    0   236G  1 lvm
      │   ├─sys-root                   254:8    0   235G  0 lvm   /
      │   ├─sys-home                   254:9    0   235G  0 lvm   /home
      │   ├─sys-containers             254:10   0   235G  0 lvm   /mnt/containers
      │   ├─sys-vms                    254:11   0   235G  0 lvm   /mnt/vms
      │   ├─sys-email                  254:12   0   235G  0 lvm   /mnt/email
      │   ├─sys-vm--01--rhel6--sda     254:13   0     8G  0 lvm
      │   ├─sys-vm--02--rhel7--sda     254:14   0    16G  0 lvm
      │   ├─sys-vm--03--rhel8--sda     254:15   0     8G  0 lvm
      │   ├─sys-vm--04--rhel6--32--sda 254:16   0     8G  0 lvm
      │   ├─sys-vm--05--rhel5--sda     254:17   0     8G  0 lvm
      │   ├─sys-vm--06--fedora--sda    254:18   0     8G  0 lvm
      │   └─sys-vm--02--rhel7--sdb     254:19   0     1G  0 lvm
      └─sys-pool_tdata                 254:5    0   236G  0 lvm
        └─sys-pool-tpool               254:6    0   236G  0 lvm
          ├─sys-pool                   254:7    0   236G  1 lvm
          ├─sys-root                   254:8    0   235G  0 lvm   /
          ├─sys-home                   254:9    0   235G  0 lvm   /home
          ├─sys-containers             254:10   0   235G  0 lvm   /mnt/containers
          ├─sys-vms                    254:11   0   235G  0 lvm   /mnt/vms
          ├─sys-email                  254:12   0   235G  0 lvm   /mnt/email
          ├─sys-vm--01--rhel6--sda     254:13   0     8G  0 lvm
          ├─sys-vm--02--rhel7--sda     254:14   0    16G  0 lvm
          ├─sys-vm--03--rhel8--sda     254:15   0     8G  0 lvm
          ├─sys-vm--04--rhel6--32--sda 254:16   0     8G  0 lvm
          ├─sys-vm--05--rhel5--sda     254:17   0     8G  0 lvm
          ├─sys-vm--06--fedora--sda    254:18   0     8G  0 lvm
          └─sys-vm--02--rhel7--sdb     254:19   0     1G  0 lvm
```

(You may ask why. Because full disk encryption is mandated by my employer, which is, surprise-surprise, Red Hat ☺. I'll Cc myself at my @rh address as well, just to be sure I do not miss anything in the future.)

Also, this doesn't happen if the number of thin volumes is small (like, fewer than 5). So, in order to reproduce this locally you may need to throw quite a few LVs into the system.

The issue is reproducible with this layout as well:

```
[~]> lsblk
NAME                                     MAJ:MIN RM   SIZE RO TYPE   MOUNTPOINT
nvme0n1                                  259:0    0 465,8G  0 disk
├─nvme0n1p1                              259:1    0     8M  0 part   /boot/EFI
└─nvme0n1p2                              259:2    0   465G  0 part
  └─md0                                  9:0      0 464,9G  0 raid10
    ├─base-boot                          253:0    0   384M  0 lvm    /boot
    └─base-sys                           253:1    0 464,5G  0 lvm
      └─sys                              253:2    0 464,5G  0 crypt
        ├─sys-swap                       253:3    0     2G  0 lvm    [SWAP]
        ├─sys-pool_tmeta                 253:4    0   116M  0 lvm
        │ └─sys-pool-tpool               253:6    0 462,3G  0 lvm
        │   ├─sys-pool                   253:7    0 462,3G  1 lvm
        │   ├─sys-root                   253:8    0   462G  0 lvm    /
        │   ├─sys-home                   253:9    0   462G  0 lvm    /home
        │   ├─sys-vms                    253:10   0   462G  0 lvm    /mnt/vms
        │   ├─sys-vm--01--archlinux--sda 253:11   0     8G  0 lvm
        │   ├─sys-vm--02--windows--sda   253:12   0    50G  0 lvm
        │   └─sys-mail                   253:13   0   462G  0 lvm    /mnt/mail
        └─sys-pool_tdata                 253:5    0 462,3G  0 lvm
          └─sys-pool-tpool               253:6    0 462,3G  0 lvm
            ├─sys-pool                   253:7    0 462,3G  1 lvm
            ├─sys-root                   253:8    0   462G  0 lvm    /
            ├─sys-home                   253:9    0   462G  0 lvm    /home
            ├─sys-vms                    253:10   0   462G  0 lvm    /mnt/vms
            ├─sys-vm--01--archlinux--sda 253:11   0     8G  0 lvm
            ├─sys-vm--02--windows--sda   253:12   0    50G  0 lvm
            └─sys-mail                   253:13   0   462G  0 lvm    /mnt/mail
nvme1n1                                  259:3    0 465,8G  0 disk
├─nvme1n1p1                              259:4    0     8M  0 part
└─nvme1n1p2                              259:5    0   465G  0 part
  └─md0                                  9:0      0 464,9G  0 raid10
    ├─base-boot                          253:0    0   384M  0 lvm    /boot
    └─base-sys                           253:1    0 464,5G  0 lvm
      └─sys                              253:2    0 464,5G  0 crypt
        ├─sys-swap                       253:3    0     2G  0 lvm    [SWAP]
        ├─sys-pool_tmeta                 253:4    0   116M  0 lvm
        │ └─sys-pool-tpool               253:6    0 462,3G  0 lvm
        │   ├─sys-pool                   253:7    0 462,3G  1 lvm
        │   ├─sys-root                   253:8    0   462G  0 lvm    /
        │   ├─sys-home                   253:9    0   462G  0 lvm    /home
        │   ├─sys-vms                    253:10   0   462G  0 lvm    /mnt/vms
        │   ├─sys-vm--01--archlinux--sda 253:11   0     8G  0 lvm
        │   ├─sys-vm--02--windows--sda   253:12   0    50G  0 lvm
        │   └─sys-mail                   253:13   0   462G  0 lvm    /mnt/mail
        └─sys-pool_tdata                 253:5    0 462,3G  0 lvm
          └─sys-pool-tpool               253:6    0 462,3G  0 lvm
            ├─sys-pool                   253:7    0 462,3G  1 lvm
            ├─sys-root                   253:8    0   462G  0 lvm    /
            ├─sys-home                   253:9    0   462G  0 lvm    /home
            ├─sys-vms                    253:10   0   462G  0 lvm    /mnt/vms
            ├─sys-vm--01--archlinux--sda 253:11   0     8G  0 lvm
            ├─sys-vm--02--windows--sda   253:12   0    50G  0 lvm
            └─sys-mail                   253:13   0   462G  0 lvm    /mnt/mail
```

(This is the one from the Arch bug report.)

With regard to the journal, here it is (from the same machine as in the Arch bug report; matches the second layout above):

```
-- Journal begins at Fri 2020-12-18 16:33:22 CET, ends at Wed 2021-02-17 14:25:57 CET. --
Feb 10 17:24:26 archlinux systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Feb 10 17:24:26 archlinux systemd[1]: systemd-udevd.service: Succeeded.
Feb 10 17:24:26 archlinux systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Feb 10 17:24:27 spock systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb 10 17:24:27 spock mtp-probe[897]: checking bus 6, device 5: "/sys/devices/pci0000:00/0000:00:08.1/0000:0d:00.3/usb6/6-4/6-4.4"
Feb 10 17:24:27 spock mtp-probe[896]: checking bus 6, device 2: "/sys/devices/pci0000:00/0000:00:08.1/0000:0d:00.3/usb6/6-1"
Feb 10 17:24:27 spock mtp-probe[899]: checking bus 6, device 4: "/sys/devices/pci0000:00/0000:00:08.1/0000:0d:00.3/usb6/6-4/6-4.3"
Feb 10 17:24:27 spock mtp-probe[898]: checking bus 6, device 6: "/sys/devices/pci0000:00/0000:00:08.1/0000:0d:00.3/usb6/6-4/6-4.5"
Feb 10 17:24:27 spock mtp-probe[897]: bus: 6, device: 5 was not an MTP device
Feb 10 17:24:27 spock mtp-probe[898]: bus: 6, device: 6 was not an MTP device
Feb 10 17:24:27 spock mtp-probe[899]: bus: 6, device: 4 was not an MTP device
Feb 10 17:24:27 spock mtp-probe[896]: bus: 6, device: 2 was not an MTP device
Feb 10 17:24:27 spock systemd-udevd[881]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 10 17:24:27 spock systemd-udevd[887]: Using default interface naming scheme 'v247'.
Feb 10 17:24:27 spock systemd-udevd[887]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 10 17:24:27 spock systemd-udevd[879]: Using default interface naming scheme 'v247'.
Feb 10 17:24:27 spock systemd-udevd[879]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 10 17:24:27 spock systemd-udevd[887]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 10 17:24:27 spock systemd-udevd[887]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 10 17:24:27 spock systemd-udevd[826]: Using default interface naming scheme 'v247'.
Feb 10 17:24:27 spock systemd-udevd[826]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 10 17:24:27 spock systemd-udevd[826]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
[~]> LC_TIME=C sudo journalctl -b -10 -u systemd-udevd.socket
-- Journal begins at Fri 2020-12-18 16:33:22 CET, ends at Wed 2021-02-17 14:27:58 CET. --
-- No entries --
[~]> LC_TIME=C sudo journalctl -b -10 -u lvm2-pvscan@\*
-- Journal begins at Fri 2020-12-18 16:33:22 CET, ends at Wed 2021-02-17 14:28:05 CET. --
Feb 10 17:24:17 archlinux systemd[1]: Starting LVM event activation on device 9:0...
Feb 10 17:24:17 archlinux lvm[463]: pvscan[463] PV /dev/md0 online, VG base is complete.
Feb 10 17:24:17 archlinux lvm[463]: pvscan[463] VG base run autoactivation.
Feb 10 17:24:17 archlinux lvm[463]: 2 logical volume(s) in volume group "base" now active
Feb 10 17:24:17 archlinux systemd[1]: Finished LVM event activation on device 9:0.
Feb 10 17:24:26 archlinux systemd[1]: Starting LVM event activation on device 253:2...
Feb 10 17:24:26 archlinux lvm[643]: pvscan[643] PV /dev/mapper/sys online, VG sys is complete.
Feb 10 17:24:26 archlinux lvm[643]: pvscan[643] VG sys run autoactivation.
Feb 10 17:24:26 archlinux lvm[643]: /usr/bin/dmeventd: stat failed: No such file or directory
Feb 10 17:24:26 archlinux lvm[643]: WARNING: Failed to monitor sys/pool.
Feb 10 17:24:26 archlinux lvm[643]: /usr/bin/dmeventd: stat failed: No such file or directory
Feb 10 17:24:26 archlinux lvm[643]: WARNING: Failed to monitor sys/pool.
Feb 10 17:24:26 archlinux lvm[643]: /usr/bin/dmeventd: stat failed: No such file or directory
Feb 10 17:24:26 archlinux lvm[643]: WARNING: Failed to monitor sys/pool.
Feb 10 17:24:26 archlinux lvm[643]: /usr/bin/dmeventd: stat failed: No such file or directory
Feb 10 17:24:26 archlinux lvm[643]: WARNING: Failed to monitor sys/pool.
Feb 10 17:24:26 archlinux lvm[643]: /usr/bin/dmeventd: stat failed: No such file or directory
Feb 10 17:24:26 archlinux lvm[643]: WARNING: Failed to monitor sys/pool.
Feb 10 17:24:26 archlinux lvm[643]: /usr/bin/dmeventd: stat failed: No such file or directory
Feb 10 17:24:26 archlinux lvm[643]: WARNING: Failed to monitor sys/pool.
Feb 10 17:24:26 archlinux systemd[1]: Stopping LVM event activation on device 9:0...
Feb 10 17:24:26 archlinux lvm[720]: pvscan[720] PV /dev/md0 online.
Feb 10 17:24:26 archlinux lvm[643]: /usr/bin/dmeventd: stat failed: No such file or directory
Feb 10 17:24:26 archlinux lvm[643]: WARNING: Failed to monitor sys/pool.
Feb 10 17:24:56 spock systemd[1]: lvm2-pvscan@253:2.service: State 'stop-sigterm' timed out. Killing.
Feb 10 17:24:56 spock systemd[1]: lvm2-pvscan@253:2.service: Killing process 643 (lvm) with signal SIGKILL.
Feb 10 17:24:56 spock systemd[1]: lvm2-pvscan@253:2.service: Main process exited, code=killed, status=9/KILL
Feb 10 17:24:56 spock systemd[1]: lvm2-pvscan@253:2.service: Failed with result 'timeout'.
Feb 10 17:24:56 spock systemd[1]: Stopped LVM event activation on device 253:2.
[~]> LC_TIME=C sudo journalctl -b -10 --grep pvscan
-- Journal begins at Fri 2020-12-18 16:33:22 CET, ends at Wed 2021-02-17 14:31:27 CET. --
Feb 10 17:24:17 archlinux systemd[1]: Created slice system-lvm2\x2dpvscan.slice.
Feb 10 17:24:17 archlinux lvm[463]: pvscan[463] PV /dev/md0 online, VG base is complete.
Feb 10 17:24:17 archlinux lvm[463]: pvscan[463] VG base run autoactivation.
Feb 10 17:24:17 archlinux audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=lvm2-pvscan@9:0 comm="systemd" exe="/init" hostname=? addr=? terminal=? res=success'
Feb 10 17:24:17 archlinux kernel: audit: type=1130 audit(1612974257.986:6): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=lvm2-pvscan@9:0 comm="systemd" exe="/init" hostname=? addr=? terminal=? res=success'
Feb 10 17:24:26 archlinux lvm[643]: pvscan[643] PV /dev/mapper/sys online, VG sys is complete.
Feb 10 17:24:26 archlinux lvm[643]: pvscan[643] VG sys run autoactivation.
Feb 10 17:24:26 archlinux lvm[720]: pvscan[720] PV /dev/md0 online.
Feb 10 17:24:27 spock systemd[1]: lvm2-pvscan@9:0.service: Control process exited, code=killed, status=15/TERM
Feb 10 17:24:27 spock systemd[1]: lvm2-pvscan@9:0.service: Failed with result 'signal'.
Feb 10 17:24:26 spock audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=lvm2-pvscan@9:0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Feb 10 17:24:27 spock systemd[1]: Requested transaction contradicts existing jobs: Transaction for lvm2-pvscan@253:2.service/start is destructive (lvm2-pvscan@253:2.service has 'stop' job queued, but 'start' is included in transaction).
Feb 10 17:24:27 spock systemd[1]: sys-devices-virtual-block-dm\x2d2.device: Failed to enqueue SYSTEMD_WANTS= job, ignoring: Transaction for lvm2-pvscan@253:2.service/start is destructive (lvm2-pvscan@253:2.service has 'stop' job queued, but 'start' is included in transaction).
Feb 10 17:24:56 spock systemd[1]: lvm2-pvscan@253:2.service: State 'stop-sigterm' timed out. Killing.
Feb 10 17:24:56 spock systemd[1]: lvm2-pvscan@253:2.service: Killing process 643 (lvm) with signal SIGKILL.
Feb 10 17:24:56 spock systemd[1]: lvm2-pvscan@253:2.service: Main process exited, code=killed, status=9/KILL
Feb 10 17:24:56 spock systemd[1]: lvm2-pvscan@253:2.service: Failed with result 'timeout'.
Feb 10 17:24:56 spock audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=lvm2-pvscan@253:2 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Feb 10 17:24:56 spock kernel: audit: type=1130 audit(1612974296.950:93): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=lvm2-pvscan@253:2 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Feb 10 17:29:57 spock systemd[1]: Removed slice system-lvm2\x2dpvscan.slice.
```

FWIW, here's what it looks like now, after adding the "After=" fix:

```
[~]> LC_TIME=C sudo journalctl -b -u systemd-udevd.service
-- Journal begins at Fri 2020-12-18 16:33:22 CET, ends at Wed 2021-02-17 14:28:47 CET. --
Feb 16 22:13:37 archlinux systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Feb 16 22:13:37 archlinux systemd[1]: systemd-udevd.service: Succeeded.
Feb 16 22:13:37 archlinux systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Feb 16 22:13:38 spock systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb 16 22:13:38 spock systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb 16 22:13:38 spock mtp-probe[895]: checking bus 6, device 2: "/sys/devices/pci0000:00/0000:00:08.1/0000:0d:00.3/usb6/6-1"
Feb 16 22:13:38 spock mtp-probe[897]: checking bus 6, device 6: "/sys/devices/pci0000:00/0000:00:08.1/0000:0d:00.3/usb6/6-4/6-4.5"
Feb 16 22:13:38 spock mtp-probe[898]: checking bus 6, device 5: "/sys/devices/pci0000:00/0000:00:08.1/0000:0d:00.3/usb6/6-4/6-4.4"
Feb 16 22:13:38 spock mtp-probe[896]: checking bus 6, device 4: "/sys/devices/pci0000:00/0000:00:08.1/0000:0d:00.3/usb6/6-4/6-4.3"
Feb 16 22:13:38 spock mtp-probe[897]: bus: 6, device: 6 was not an MTP device
Feb 16 22:13:38 spock mtp-probe[898]: bus: 6, device: 5 was not an MTP device
Feb 16 22:13:38 spock mtp-probe[896]: bus: 6, device: 4 was not an MTP device
Feb 16 22:13:38 spock mtp-probe[895]: bus: 6, device: 2 was not an MTP device
Feb 16 22:13:38 spock systemd-udevd[825]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 16 22:13:38 spock systemd-udevd[866]: Using default interface naming scheme 'v247'.
Feb 16 22:13:38 spock systemd-udevd[860]: Using default interface naming scheme 'v247'.
Feb 16 22:13:38 spock systemd-udevd[866]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 16 22:13:38 spock systemd-udevd[860]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 16 22:13:38 spock systemd-udevd[833]: Using default interface naming scheme 'v247'.
Feb 16 22:13:38 spock systemd-udevd[833]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 16 22:13:38 spock systemd-udevd[833]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 16 22:13:39 spock systemd-udevd[892]: Using default interface naming scheme 'v247'.
Feb 16 22:13:39 spock systemd-udevd[892]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 16 22:13:39 spock systemd-udevd[842]: Using default interface naming scheme 'v247'.
Feb 16 22:13:39 spock systemd-udevd[842]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
[~]> LC_TIME=C sudo journalctl -b -u systemd-udevd.socket
-- Journal begins at Fri 2020-12-18 16:33:22 CET, ends at Wed 2021-02-17 14:29:04 CET. --
-- No entries --
[~]> LC_TIME=C sudo journalctl -b -u lvm2-pvscan@\*
-- Journal begins at Fri 2020-12-18 16:33:22 CET, ends at Wed 2021-02-17 14:29:15 CET. --
Feb 16 22:13:27 archlinux systemd[1]: Starting LVM event activation on device 9:0...
Feb 16 22:13:27 archlinux lvm[495]: pvscan[495] PV /dev/md0 online, VG base is complete.
Feb 16 22:13:27 archlinux lvm[495]: pvscan[495] VG base run autoactivation.
Feb 16 22:13:27 archlinux lvm[495]: 2 logical volume(s) in volume group "base" now active
Feb 16 22:13:28 archlinux systemd[1]: Finished LVM event activation on device 9:0.
Feb 16 22:13:37 archlinux systemd[1]: Starting LVM event activation on device 253:2...
Feb 16 22:13:37 archlinux lvm[665]: pvscan[665] PV /dev/mapper/sys online, VG sys is complete.
Feb 16 22:13:37 archlinux lvm[665]: pvscan[665] VG sys run autoactivation.
Feb 16 22:13:37 archlinux lvm[665]: /usr/bin/dmeventd: stat failed: No such file or directory
Feb 16 22:13:37 archlinux lvm[665]: WARNING: Failed to monitor sys/pool.
Feb 16 22:13:37 archlinux lvm[665]: /usr/bin/dmeventd: stat failed: No such file or directory
Feb 16 22:13:37 archlinux lvm[665]: WARNING: Failed to monitor sys/pool.
Feb 16 22:13:37 archlinux lvm[665]: /usr/bin/dmeventd: stat failed: No such file or directory
Feb 16 22:13:37 archlinux lvm[665]: WARNING: Failed to monitor sys/pool.
Feb 16 22:13:37 archlinux lvm[665]: /usr/bin/dmeventd: stat failed: No such file or directory
Feb 16 22:13:37 archlinux lvm[665]: WARNING: Failed to monitor sys/pool.
Feb 16 22:13:37 archlinux lvm[665]: /usr/bin/dmeventd: stat failed: No such file or directory
Feb 16 22:13:37 archlinux lvm[665]: WARNING: Failed to monitor sys/pool.
Feb 16 22:13:37 archlinux lvm[665]: /usr/bin/dmeventd: stat failed: No such file or directory
Feb 16 22:13:37 archlinux lvm[665]: WARNING: Failed to monitor sys/pool.
Feb 16 22:13:37 archlinux systemd[1]: Stopping LVM event activation on device 9:0...
Feb 16 22:13:37 archlinux lvm[743]: pvscan[743] PV /dev/md0 online.
Feb 16 22:13:37 archlinux lvm[665]: /usr/bin/dmeventd: stat failed: No such file or directory
Feb 16 22:13:37 archlinux lvm[665]: WARNING: Failed to monitor sys/pool.
Feb 16 22:13:37 archlinux lvm[665]: 8 logical volume(s) in volume group "sys" now active
Feb 16 22:13:37 archlinux systemd[1]: lvm2-pvscan@9:0.service: Succeeded.
Feb 16 22:13:37 archlinux systemd[1]: Stopped LVM event activation on device 9:0.
Feb 16 22:13:37 archlinux systemd[1]: lvm2-pvscan@253:2.service: Main process exited, code=killed, status=15/TERM
Feb 16 22:13:37 archlinux systemd[1]: lvm2-pvscan@253:2.service: Failed with result 'signal'.
Feb 16 22:13:37 archlinux systemd[1]: Stopped LVM event activation on device 253:2.
Feb 16 22:13:38 spock systemd[1]: Starting LVM event activation on device 253:2...
Feb 16 22:13:38 spock lvm[929]: pvscan[929] PV /dev/mapper/sys online, VG sys is complete.
Feb 16 22:13:38 spock lvm[929]: pvscan[929] VG sys skip autoactivation.
Feb 16 22:13:38 spock systemd[1]: Finished LVM event activation on device 253:2.
[~]> LC_TIME=C sudo journalctl -b --grep pvscan | /bin/cat -
-- Journal begins at Fri 2020-12-18 16:33:22 CET, ends at Wed 2021-02-17 14:31:58 CET. --
Feb 16 22:13:27 archlinux systemd[1]: Created slice system-lvm2\x2dpvscan.slice.
Feb 16 22:13:27 archlinux lvm[495]: pvscan[495] PV /dev/md0 online, VG base is complete.
Feb 16 22:13:27 archlinux lvm[495]: pvscan[495] VG base run autoactivation.
Feb 16 22:13:28 archlinux audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=lvm2-pvscan@9:0 comm="systemd" exe="/init" hostname=? addr=? terminal=? res=success'
Feb 16 22:13:28 archlinux kernel: audit: type=1130 audit(1613510008.000:6): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=lvm2-pvscan@9:0 comm="systemd" exe="/init" hostname=? addr=? terminal=? res=success'
Feb 16 22:13:37 archlinux lvm[665]: pvscan[665] PV /dev/mapper/sys online, VG sys is complete.
Feb 16 22:13:37 archlinux lvm[665]: pvscan[665] VG sys run autoactivation.
Feb 16 22:13:37 archlinux lvm[743]: pvscan[743] PV /dev/md0 online.
Feb 16 22:13:37 archlinux systemd[1]: lvm2-pvscan@9:0.service: Succeeded.
Feb 16 22:13:37 archlinux audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=lvm2-pvscan@9:0 comm="systemd" exe="/init" hostname=? addr=? terminal=? res=success'
Feb 16 22:13:37 archlinux systemd[1]: lvm2-pvscan@253:2.service: Main process exited, code=killed, status=15/TERM
Feb 16 22:13:37 archlinux systemd[1]: lvm2-pvscan@253:2.service: Failed with result 'signal'.
Feb 16 22:13:37 archlinux audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=lvm2-pvscan@253:2 comm="systemd" exe="/init" hostname=? addr=? terminal=? res=failed'
Feb 16 22:13:38 spock lvm[929]: pvscan[929] PV /dev/mapper/sys online, VG sys is complete.
Feb 16 22:13:38 spock lvm[929]: pvscan[929] VG sys skip autoactivation.
Feb 16 22:13:38 spock audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=lvm2-pvscan@253:2 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
```

Note that I do not see systemd-udevd.socket mentioned anywhere in there. Am I missing something?

Also, I see the following difference between these two runs:

```
Feb 10 17:24:27 spock systemd[1]: Requested transaction contradicts existing jobs: Transaction for lvm2-pvscan@253:2.service/start is destructive (lvm2-pvscan@253:2.service has 'stop' job queued, but 'start' is included in transaction).
Feb 10 17:24:27 spock systemd[1]: sys-devices-virtual-block-dm\x2d2.device: Failed to enqueue SYSTEMD_WANTS= job, ignoring: Transaction for lvm2-pvscan@253:2.service/start is destructive (lvm2-pvscan@253:2.service has 'stop' job queued, but 'start' is included in transaction).
```

I'm not sure I understand what it means, though. Likely, this is just a consequence of the first pvscan invocation being stuck.

If needed, I can intentionally break one of the machines again, but to do that, please let me know what kind of data I should collect. I'm not sure a plain journal will reveal more; maybe I should enable some extra debug options?

Thanks.

--
Oleksandr Natalenko (post-factum)

_______________________________________________
linux-lvm mailing list
linux-lvm@xxxxxxxxxx
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
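[Editor's note for archive readers: the "After=" fix discussed in this thread can be expressed as a systemd drop-in for the lvm2-pvscan@ template unit. The sketch below is an assumption based on the thread and the linked Arch report (the ordering target systemd-udevd.service and the drop-in file name are not confirmed by this message); it writes to a scratch directory so it is safe to run as-is, whereas on a real system the target would be /etc/systemd/system/lvm2-pvscan@.service.d/ followed by `systemctl daemon-reload`.]

```shell
# Sketch of the ordering workaround as a drop-in (assumed unit names; see note above).
# On a real system: dropin_dir=/etc/systemd/system/lvm2-pvscan@.service.d
dropin_dir="$(mktemp -d)/lvm2-pvscan@.service.d"
mkdir -p "$dropin_dir"

# Order every lvm2-pvscan@ instance after the udev daemon, so the scan is not
# started before systemd-udevd.service and then torn down when udev restarts.
cat > "$dropin_dir/after-udev.conf" <<'EOF'
[Unit]
After=systemd-udevd.service
EOF

cat "$dropin_dir/after-udev.conf"
```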