lvmetad doesn't terminate with SIGTERM if thin volume used

After upgrading to systemd v231, my shutdowns/reboots have a 90-second
delay at the very end (Linux kvm 4.6.4-1).

After looking into it, I found the cause: lvmetad never terminates when
it receives SIGTERM, so after 90 seconds systemd falls back to SIGKILL.
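The escalation is easy to see in isolation. As a toy illustration (nothing
here is LVM-specific), a child that ignores SIGTERM only goes away once
SIGKILL arrives, which is exactly the pattern systemd observes here:

```shell
# Toy model of the stop-sigterm -> SIGKILL escalation (not lvmetad itself):
# spawn a child that ignores SIGTERM, as lvmetad effectively does here.
sh -c 'trap "" TERM; sleep 60' &
pid=$!
sleep 1                          # let the child install its trap
kill -TERM "$pid"                # what systemd sends first; ignored
sleep 1
if kill -0 "$pid" 2>/dev/null; then
    echo "still alive after SIGTERM"
fi
kill -KILL "$pid"                # systemd's fallback after TimeoutStopSec
wait "$pid" 2>/dev/null || true
echo "gone after SIGKILL"
```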

systemd 231 (commit d4506129) changed the timeout between sending
SIGTERM and falling back to SIGKILL from 10 seconds to 90 seconds.  I
think this bug has been around for quite a while, because I had noticed
shutdowns pausing for about 10 seconds at the same spot that now has
the 90-second delay.
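Until the daemon itself is fixed, one way to keep shutdowns fast is a
per-unit drop-in capping the stop timeout, roughly restoring the old
10-second behavior.  This is a workaround I'm suggesting, not something
the lvm2 packaging ships:

```shell
# Hypothetical workaround, not a fix: bound how long systemd waits
# between SIGTERM and SIGKILL for this one unit via a drop-in file.
mkdir -p /etc/systemd/system/lvm2-lvmetad.service.d
cat > /etc/systemd/system/lvm2-lvmetad.service.d/stop-timeout.conf <<'EOF'
[Service]
TimeoutStopSec=10s
EOF
systemctl daemon-reload
```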

Attached is a systemd debug dmesg log through shutdown, taken with
lvmetad running with "-l all" and filtered through "grep -i lvm2".  The
full (4 MB) version is here:
http://45.63.106.241/share/lvm2-lvmetad.shutdown-log2.txt

The same thing happens if I stop lvm2-lvmetad manually; output showing
that is attached.

Also attached are the minimal steps I used to reproduce the problem,
using one disk and ext4.

If, during the install, I combine the two lvcreate commands into a
single one that does not use a thin pool, then lvmetad terminates
almost immediately on SIGTERM.
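For reference, the combined non-thin variant I mean is a single plain
(linear) LV.  The exact command isn't in the transcript below, so this
is my reconstruction of it:

```shell
# Hypothetical combined lvcreate: one linear LV, no thin pool.
# With this layout, lvmetad exits promptly on SIGTERM.
lvcreate --size 100G --name root disk1
```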

==============================

# systemctl status lvm2-lvmetad
● lvm2-lvmetad.service - LVM2 metadata daemon
   Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2016-08-03 21:36:54 EDT; 1min 14s ago
     Docs: man:lvmetad(8)
 Main PID: 398 (lvmetad)
    Tasks: 2 (limit: 4915)
   CGroup: /system.slice/lvm2-lvmetad.service
           └─398 /usr/bin/lvmetad -f

# systemctl stop lvm2-lvmetad
{{{ after 90 seconds }}}
Warning: Stopping lvm2-lvmetad.service, but it can still be activated by:
  lvm2-lvmetad.socket
# systemctl status lvm2-lvmetad
● lvm2-lvmetad.service - LVM2 metadata daemon
   Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.service; disabled; vendor preset: disabled)
   Active: failed (Result: signal) since Wed 2016-08-03 21:40:33 EDT; 12s ago
     Docs: man:lvmetad(8)
  Process: 398 ExecStart=/usr/bin/lvmetad -f (code=killed, signal=KILL)
 Main PID: 398 (code=killed, signal=KILL)

Aug 03 21:36:54 terra systemd[1]: Started LVM2 metadata daemon.
Aug 03 21:39:03 terra systemd[1]: Stopping LVM2 metadata daemon...
Aug 03 21:39:03 terra lvmetad[398]: Failed to accept connection.
Aug 03 21:40:33 terra systemd[1]: lvm2-lvmetad.service: State 'stop-sigterm' timed out. Killing.
Aug 03 21:40:33 terra systemd[1]: lvm2-lvmetad.service: Killing process 398 (lvmetad) with signal SIGKILL.
Aug 03 21:40:33 terra systemd[1]: lvm2-lvmetad.service: Main process exited, code=killed, status=9/KILL
Aug 03 21:40:33 terra systemd[1]: Stopped LVM2 metadata daemon.
Aug 03 21:40:33 terra systemd[1]: lvm2-lvmetad.service: Unit entered failed state.
Aug 03 21:40:33 terra systemd[1]: lvm2-lvmetad.service: Failed with result 'signal'.

====================

/dev/sda1 3.5G Linux filesystem
/dev/sda2 4.5TB Linux LVM

{ Set up LVM and filesystems }
# mkfs.ext4 -L boot /dev/sda1
# pvcreate /dev/sda2
# vgcreate disk1 /dev/sda2
{ Merging these two lvcreates into one, i.e. removing the thin volume
usage, makes lvm2-lvmetad terminate properly on SIGTERM }
# lvcreate --size 500G --thinpool disk1thin disk1
# lvcreate --virtualsize 100G --name main disk1/disk1thin
# mkfs.ext4 -L /mnt /dev/disk1/main
# mount /dev/disk1/main /mnt
# mkdir /mnt/boot
# mount /dev/sda1 /mnt/boot

{ Install Arch Linux }
# vi /etc/pacman.d/mirrorlist
# pacstrap -i /mnt base syslinux gptfdisk lvm2
# arch-chroot /mnt
# vi /etc/locale.gen
# locale-gen
# locale > /etc/locale.conf
# vi /etc/nsswitch.conf
# systemctl enable systemd-resolved systemd-networkd
# ln -s /usr/share/zoneinfo/America/Detroit /etc/localtime
# hwclock --utc --systohc
# passwd
{ Add lvm2 between block and filesystems }
# vi /etc/mkinitcpio.conf
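The mkinitcpio.conf edit is the HOOKS line; assuming the stock Arch
defaults (my reconstruction, since the actual line isn't shown), it
ends up roughly as:

```shell
# /etc/mkinitcpio.conf -- lvm2 inserted between block and filesystems
# (assumes the default Arch HOOKS line; adjust to your config).
HOOKS="base udev autodetect modconf block lvm2 filesystems keyboard fsck"
```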
# mkinitcpio -p linux
# echo hostname > /etc/hostname
# vi /etc/systemd/network/enp31s0.network
# syslinux-install_update -i -a -m
# vi /boot/syslinux/syslinux.cfg

{ After Reboot }
# vi /etc/fstab
[    1.329731] systemd-udevd[102]: Reading rules file: /usr/lib/udev/rules.d/11-dm-lvm.rules
[    1.330295] systemd-udevd[102]: Reading rules file: /usr/lib/udev/rules.d/69-dm-lvm-metad.rules
[    2.784667] systemd-udevd[171]: LINK 'disk/by-id/lvm-pv-uuid-OtSgxO-WvMd-mzfn-FbPw-aXBv-TaT6-xHP2bM' /usr/lib/udev/rules.d/69-dm-lvm-metad.rules:38
[    2.784686] systemd-udevd[171]: RUN '/usr/bin/lvm pvscan --background --cache --activate ay --major $major --minor $minor' /usr/lib/udev/rules.d/69-dm-lvm-metad.rules:91
[    2.784778] systemd-udevd[171]: creating link '/dev/disk/by-id/lvm-pv-uuid-OtSgxO-WvMd-mzfn-FbPw-aXBv-TaT6-xHP2bM' to '/dev/sda2'
[    2.784786] systemd-udevd[171]: creating symlink '/dev/disk/by-id/lvm-pv-uuid-OtSgxO-WvMd-mzfn-FbPw-aXBv-TaT6-xHP2bM' to '../../sda2'
[    2.785414] systemd-udevd[228]: starting '/usr/bin/lvm pvscan --background --cache --activate ay --major 8 --minor 2'
[    2.787879] systemd-udevd[171]: '/usr/bin/lvm pvscan --background --cache --activate ay --major 8 --minor 2'(err) 'File descriptor 13 (/dev/null) leaked on lvm invocation. Parent PID 171: /usr/lib/systemd/systemd-udevd'
[    2.790410] systemd-udevd[171]: Process '/usr/bin/lvm pvscan --background --cache --activate ay --major 8 --minor 2' succeeded.
[    2.869537] systemd-udevd[213]: IMPORT '/usr/bin/dmsetup splitname --nameprefixes --noheadings --rows disk1-disk1thin_tmeta' /usr/lib/udev/rules.d/11-dm-lvm.rules:21
[    2.870195] systemd-udevd[216]: IMPORT '/usr/bin/dmsetup splitname --nameprefixes --noheadings --rows disk1-disk1thin_tdata' /usr/lib/udev/rules.d/11-dm-lvm.rules:21
[    3.009000] systemd-udevd[213]: IMPORT '/usr/bin/dmsetup splitname --nameprefixes --noheadings --rows disk1-disk1thin-tpool' /usr/lib/udev/rules.d/11-dm-lvm.rules:21
[    3.009641] systemd-udevd[217]: IMPORT '/usr/bin/dmsetup splitname --nameprefixes --noheadings --rows disk1-disk1thin' /usr/lib/udev/rules.d/11-dm-lvm.rules:21
[    3.029390] systemd-udevd[216]: IMPORT '/usr/bin/dmsetup splitname --nameprefixes --noheadings --rows disk1-main1' /usr/lib/udev/rules.d/11-dm-lvm.rules:21
[    3.031245] systemd-udevd[216]: LINK 'disk1/main1' /usr/lib/udev/rules.d/11-dm-lvm.rules:47
[    3.031268] systemd-udevd[216]: LINK 'disk/by-id/dm-uuid-LVM-z9MxbNzkTUgcbKkSLAfArKEoOVXnfOAiKP0sgRuTugGp7iMqTle3x1PPFuafJ4s5' /usr/lib/udev/rules.d/13-dm-disk.rules:18
[    3.084251] systemd-udevd[216]: creating link '/dev/disk/by-id/dm-uuid-LVM-z9MxbNzkTUgcbKkSLAfArKEoOVXnfOAiKP0sgRuTugGp7iMqTle3x1PPFuafJ4s5' to '/dev/dm-4'
[    3.084259] systemd-udevd[216]: creating symlink '/dev/disk/by-id/dm-uuid-LVM-z9MxbNzkTUgcbKkSLAfArKEoOVXnfOAiKP0sgRuTugGp7iMqTle3x1PPFuafJ4s5' to '../../dm-4'
[    7.716837] systemd-udevd[282]: IMPORT '/usr/bin/dmsetup splitname --nameprefixes --noheadings --rows disk1-main1' /usr/lib/udev/rules.d/11-dm-lvm.rules:21
[    7.719065] systemd-udevd[282]: LINK 'disk1/main1' /usr/lib/udev/rules.d/11-dm-lvm.rules:47
[    7.719088] systemd-udevd[282]: LINK 'disk/by-id/dm-uuid-LVM-z9MxbNzkTUgcbKkSLAfArKEoOVXnfOAiKP0sgRuTugGp7iMqTle3x1PPFuafJ4s5' /usr/lib/udev/rules.d/13-dm-disk.rules:18
[    7.724487] systemd-udevd[282]: found 'b254:4' claiming '/run/udev/links/\x2fdisk\x2fby-id\x2fdm-uuid-LVM-z9MxbNzkTUgcbKkSLAfArKEoOVXnfOAiKP0sgRuTugGp7iMqTle3x1PPFuafJ4s5'
[    7.724494] systemd-udevd[282]: creating link '/dev/disk/by-id/dm-uuid-LVM-z9MxbNzkTUgcbKkSLAfArKEoOVXnfOAiKP0sgRuTugGp7iMqTle3x1PPFuafJ4s5' to '/dev/dm-4'
[    7.724503] systemd-udevd[282]: preserve already existing symlink '/dev/disk/by-id/dm-uuid-LVM-z9MxbNzkTUgcbKkSLAfArKEoOVXnfOAiKP0sgRuTugGp7iMqTle3x1PPFuafJ4s5' to '../../dm-4'
[    8.334568] systemd[305]: Spawned /usr/lib/systemd/system-generators/lvm2-activation-generator as 312.
[    8.440902] systemd[305]: /usr/lib/systemd/system-generators/lvm2-activation-generator succeeded.
[    8.460720] systemd[1]: run-lvm.mount: Failed to load configuration: No such file or directory
[    8.460729] systemd[1]: run-lvm-lvmetad.socket.mount: Failed to load configuration: No such file or directory
[    8.541146] systemd[1]: lvm2-lvmetad.service: Installed new job lvm2-lvmetad.service/start as 18
[    8.541193] systemd[1]: lvm2-lvmetad.socket: Installed new job lvm2-lvmetad.socket/start as 19
[    8.541517] systemd[1]: run-lvm.mount: Collecting.
[    8.541521] systemd[1]: run-lvm-lvmetad.socket.mount: Collecting.
[    8.553957] systemd[1]: lvm2-lvmetad.socket: Changed dead -> listening
[    8.553963] systemd[1]: lvm2-lvmetad.socket: Job lvm2-lvmetad.socket/start finished, result=done
[    8.553971] systemd[1]: Listening on LVM2 metadata daemon socket.
[    8.555092] systemd[1]: lvm2-lvmetad.service: About to execute: /usr/bin/lvmetad -f -l all
[    8.555364] systemd[1]: lvm2-lvmetad.service: Forked /usr/bin/lvmetad as 314
[    8.555644] systemd[1]: lvm2-lvmetad.service: Changed dead -> running
[    8.555651] systemd[1]: lvm2-lvmetad.service: Job lvm2-lvmetad.service/start finished, result=done
[    8.555667] systemd[1]: Started LVM2 metadata daemon.
[    8.555787] systemd[1]: lvm2-lvmetad.socket: Changed listening -> running
[    8.556000] systemd[314]: lvm2-lvmetad.service: Executing: /usr/bin/lvmetad -f -l all
[    8.965188] systemd-udevd[331]: Reading rules file: /usr/lib/udev/rules.d/11-dm-lvm.rules
[    8.974279] systemd-udevd[331]: Reading rules file: /usr/lib/udev/rules.d/69-dm-lvm-metad.rules
[    9.380065] systemd-udevd[334]: IMPORT '/usr/bin/dmsetup splitname --nameprefixes --noheadings --rows disk1-disk1thin_tmeta' /usr/lib/udev/rules.d/11-dm-lvm.rules:21
[    9.380476] systemd-udevd[336]: IMPORT '/usr/bin/dmsetup splitname --nameprefixes --noheadings --rows disk1-disk1thin_tdata' /usr/lib/udev/rules.d/11-dm-lvm.rules:21
[    9.380485] systemd-udevd[340]: IMPORT '/usr/bin/dmsetup splitname --nameprefixes --noheadings --rows disk1-disk1thin-tpool' /usr/lib/udev/rules.d/11-dm-lvm.rules:21
[    9.381134] systemd-udevd[351]: IMPORT '/usr/bin/dmsetup splitname --nameprefixes --noheadings --rows disk1-disk1thin' /usr/lib/udev/rules.d/11-dm-lvm.rules:21
[    9.384687] systemd-udevd[335]: IMPORT '/usr/bin/dmsetup splitname --nameprefixes --noheadings --rows disk1-main1' /usr/lib/udev/rules.d/11-dm-lvm.rules:21
[    9.427780] systemd-udevd[335]: LINK 'disk1/main1' /usr/lib/udev/rules.d/11-dm-lvm.rules:47
[    9.427811] systemd-udevd[335]: LINK 'disk/by-id/dm-uuid-LVM-z9MxbNzkTUgcbKkSLAfArKEoOVXnfOAiKP0sgRuTugGp7iMqTle3x1PPFuafJ4s5' /usr/lib/udev/rules.d/13-dm-disk.rules:18
[    9.566813] systemd-udevd[335]: creating link '/dev/disk/by-id/dm-uuid-LVM-z9MxbNzkTUgcbKkSLAfArKEoOVXnfOAiKP0sgRuTugGp7iMqTle3x1PPFuafJ4s5' to '/dev/dm-4'
[    9.566822] systemd-udevd[335]: preserve already existing symlink '/dev/disk/by-id/dm-uuid-LVM-z9MxbNzkTUgcbKkSLAfArKEoOVXnfOAiKP0sgRuTugGp7iMqTle3x1PPFuafJ4s5' to '../../dm-4'
[    9.568267] systemd[1]: dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dz9MxbNzkTUgcbKkSLAfArKEoOVXnfOAiKP0sgRuTugGp7iMqTle3x1PPFuafJ4s5.device: Changed dead -> plugged
[    9.834350] systemd-udevd[336]: LINK 'disk/by-id/lvm-pv-uuid-OtSgxO-WvMd-mzfn-FbPw-aXBv-TaT6-xHP2bM' /usr/lib/udev/rules.d/69-dm-lvm-metad.rules:38
[    9.834502] systemd-udevd[336]: creating link '/dev/disk/by-id/lvm-pv-uuid-OtSgxO-WvMd-mzfn-FbPw-aXBv-TaT6-xHP2bM' to '/dev/sda2'
[    9.834511] systemd-udevd[336]: preserve already existing symlink '/dev/disk/by-id/lvm-pv-uuid-OtSgxO-WvMd-mzfn-FbPw-aXBv-TaT6-xHP2bM' to '../../sda2'
[    9.880639] systemd[1]: dev-disk-by\x2did-lvm\x2dpv\x2duuid\x2dOtSgxO\x2dWvMd\x2dmzfn\x2dFbPw\x2daXBv\x2dTaT6\x2dxHP2bM.device: Changed dead -> plugged
[    9.880661] systemd[1]: lvm2-pvscan@8:2.service: Trying to enqueue job lvm2-pvscan@8:2.service/start/fail
[    9.880760] systemd[1]: system-lvm2\x2dpvscan.slice: Installed new job system-lvm2\x2dpvscan.slice/start as 92
[    9.880767] systemd[1]: lvm2-pvscan@8:2.service: Installed new job lvm2-pvscan@8:2.service/start as 89
[    9.880772] systemd[1]: lvm2-pvscan@8:2.service: Enqueued job lvm2-pvscan@8:2.service/start as 89
[    9.923444] systemd[1]: system-lvm2\x2dpvscan.slice changed dead -> active
[    9.923459] systemd[1]: system-lvm2\x2dpvscan.slice: Job system-lvm2\x2dpvscan.slice/start finished, result=done
[    9.923475] systemd[1]: Created slice system-lvm2\x2dpvscan.slice.
[    9.923824] systemd[1]: lvm2-pvscan@8:2.service: About to execute: /usr/bin/lvm pvscan --cache --activate ay 8:2
[    9.924148] systemd[1]: lvm2-pvscan@8:2.service: Forked /usr/bin/lvm as 382
[    9.924716] systemd[1]: lvm2-pvscan@8:2.service: Changed dead -> start
[    9.924755] systemd[1]: Starting LVM2 PV scan on device 8:2...
[    9.924906] systemd[382]: lvm2-pvscan@8:2.service: Executing: /usr/bin/lvm pvscan --cache --activate ay 8:2
[   10.151718] systemd-udevd[352]: IMPORT '/usr/bin/dmsetup splitname --nameprefixes --noheadings --rows disk1-disk1thin_tdata' /usr/lib/udev/rules.d/11-dm-lvm.rules:21
[   10.153713] systemd-udevd[336]: IMPORT '/usr/bin/dmsetup splitname --nameprefixes --noheadings --rows disk1-disk1thin_tmeta' /usr/lib/udev/rules.d/11-dm-lvm.rules:21
[   10.155072] systemd-udevd[337]: IMPORT '/usr/bin/dmsetup splitname --nameprefixes --noheadings --rows disk1-disk1thin-tpool' /usr/lib/udev/rules.d/11-dm-lvm.rules:21
[   10.155870] systemd-udevd[354]: IMPORT '/usr/bin/dmsetup splitname --nameprefixes --noheadings --rows disk1-disk1thin' /usr/lib/udev/rules.d/11-dm-lvm.rules:21
[   10.270121] systemd-udevd[336]: IMPORT '/usr/bin/dmsetup splitname --nameprefixes --noheadings --rows disk1-disk1thin_tmeta' /usr/lib/udev/rules.d/11-dm-lvm.rules:21
[   10.272027] systemd-udevd[337]: IMPORT '/usr/bin/dmsetup splitname --nameprefixes --noheadings --rows disk1-disk1thin-tpool' /usr/lib/udev/rules.d/11-dm-lvm.rules:21
[   10.273874] systemd-udevd[352]: IMPORT '/usr/bin/dmsetup splitname --nameprefixes --noheadings --rows disk1-disk1thin_tdata' /usr/lib/udev/rules.d/11-dm-lvm.rules:21
[   10.275955] systemd-udevd[354]: IMPORT '/usr/bin/dmsetup splitname --nameprefixes --noheadings --rows disk1-main1' /usr/lib/udev/rules.d/11-dm-lvm.rules:21
[   10.278582] systemd-udevd[354]: LINK 'disk1/main1' /usr/lib/udev/rules.d/11-dm-lvm.rules:47
[   10.278605] systemd-udevd[354]: LINK 'disk/by-id/dm-uuid-LVM-z9MxbNzkTUgcbKkSLAfArKEoOVXnfOAiKP0sgRuTugGp7iMqTle3x1PPFuafJ4s5' /usr/lib/udev/rules.d/13-dm-disk.rules:18
[   10.280681] systemd-udevd[354]: found 'b254:4' claiming '/run/udev/links/\x2fdisk\x2fby-id\x2fdm-uuid-LVM-z9MxbNzkTUgcbKkSLAfArKEoOVXnfOAiKP0sgRuTugGp7iMqTle3x1PPFuafJ4s5'
[   10.280688] systemd-udevd[354]: creating link '/dev/disk/by-id/dm-uuid-LVM-z9MxbNzkTUgcbKkSLAfArKEoOVXnfOAiKP0sgRuTugGp7iMqTle3x1PPFuafJ4s5' to '/dev/dm-4'
[   10.280697] systemd-udevd[354]: preserve already existing symlink '/dev/disk/by-id/dm-uuid-LVM-z9MxbNzkTUgcbKkSLAfArKEoOVXnfOAiKP0sgRuTugGp7iMqTle3x1PPFuafJ4s5' to '../../dm-4'
[   10.285719] systemd[1]: lvm2-pvscan@8:2.service: cgroup is empty
[   10.285764] systemd[1]: Received SIGCHLD from PID 382 (lvm).
[   10.285787] systemd[1]: Child 382 (lvm) died (code=exited, status=0/SUCCESS)
[   10.285812] systemd[1]: lvm2-pvscan@8:2.service: Child 382 belongs to lvm2-pvscan@8:2.service
[   10.285831] systemd[1]: lvm2-pvscan@8:2.service: Main process exited, code=exited, status=0/SUCCESS
[   10.285929] systemd[1]: lvm2-pvscan@8:2.service: Changed start -> exited
[   10.285937] systemd[1]: lvm2-pvscan@8:2.service: Job lvm2-pvscan@8:2.service/start finished, result=done
[   10.285949] systemd[1]: Started LVM2 PV scan on device 8:2.
[   16.385138] systemd[1]: system-lvm2\x2dpvscan.slice: Installed new job system-lvm2\x2dpvscan.slice/stop as 193
[   16.385202] systemd[1]: lvm2-lvmetad.service: Installed new job lvm2-lvmetad.service/stop as 188
[   16.385436] systemd[1]: lvm2-pvscan@8:2.service: Installed new job lvm2-pvscan@8:2.service/stop as 163
[   16.386496] systemd[1]: lvm2-pvscan@8:2.service: About to execute: /usr/bin/lvm pvscan --cache 8:2
[   16.386857] systemd[1]: lvm2-pvscan@8:2.service: Forked /usr/bin/lvm as 463
[   16.400282] systemd[1]: lvm2-pvscan@8:2.service: Changed exited -> stop
[   16.400310] systemd[1]: Stopping LVM2 PV scan on device 8:2...
[   16.400362] systemd[463]: lvm2-pvscan@8:2.service: Executing: /usr/bin/lvm pvscan --cache 8:2
[   16.405188] systemd[1]: Sent message type=signal sender=n/a destination=n/a object=/org/freedesktop/systemd1/unit/lvm2_2dpvscan_408_3a2_2eservice interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=138 reply_cookie=0 error=n/a
[   16.405233] systemd[1]: Sent message type=signal sender=n/a destination=n/a object=/org/freedesktop/systemd1/unit/lvm2_2dpvscan_408_3a2_2eservice interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=139 reply_cookie=0 error=n/a
[   16.599500] systemd[1]: Received SIGCHLD from PID 463 (lvm).
[   16.599521] systemd[1]: Child 463 (lvm) died (code=exited, status=0/SUCCESS)
[   16.599545] systemd[1]: lvm2-pvscan@8:2.service: Child 463 belongs to lvm2-pvscan@8:2.service
[   16.599556] systemd[1]: lvm2-pvscan@8:2.service: Control process exited, code=exited status=0
[   16.599619] systemd[1]: lvm2-pvscan@8:2.service: Got final SIGCHLD for state stop.
[   16.599736] systemd[1]: lvm2-pvscan@8:2.service: Changed stop -> dead
[   16.599802] systemd[1]: lvm2-pvscan@8:2.service: Job lvm2-pvscan@8:2.service/stop finished, result=done
[   16.599813] systemd[1]: Stopped LVM2 PV scan on device 8:2.
[   16.600110] systemd[1]: system-lvm2\x2dpvscan.slice changed active -> dead
[   16.600179] systemd[1]: system-lvm2\x2dpvscan.slice: Job system-lvm2\x2dpvscan.slice/stop finished, result=done
[   16.600189] systemd[1]: Removed slice system-lvm2\x2dpvscan.slice.
[   16.600345] systemd[1]: lvm2-lvmetad.service: Changed running -> stop-sigterm
[   16.600359] systemd[1]: Stopping LVM2 metadata daemon...
[   17.304643] systemd[1]: dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dz9MxbNzkTUgcbKkSLAfArKEoOVXnfOAiKP0sgRuTugGp7iMqTle3x1PPFuafJ4s5.device: Failed to send unit remove signal for dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dz9MxbNzkTUgcbKkSLAfArKEoOVXnfOAiKP0sgRuTugGp7iMqTle3x1PPFuafJ4s5.device: Transport endpoint is not connected
[   17.306802] systemd[1]: system-lvm2\x2dpvscan.slice: Failed to send unit remove signal for system-lvm2\x2dpvscan.slice: Transport endpoint is not connected
[   17.306871] systemd[1]: dev-disk-by\x2did-lvm\x2dpv\x2duuid\x2dOtSgxO\x2dWvMd\x2dmzfn\x2dFbPw\x2daXBv\x2dTaT6\x2dxHP2bM.device: Failed to send unit remove signal for dev-disk-by\x2did-lvm\x2dpv\x2duuid\x2dOtSgxO\x2dWvMd\x2dmzfn\x2dFbPw\x2daXBv\x2dTaT6\x2dxHP2bM.device: Transport endpoint is not connected
[   17.306886] systemd[1]: lvm2-lvmetad.socket: Failed to send unit remove signal for lvm2-lvmetad.socket: Transport endpoint is not connected
[   17.324312] systemd[1]: lvm2-lvmetad.service: Failed to send unit remove signal for lvm2-lvmetad.service: Transport endpoint is not connected
[   17.325517] systemd[1]: lvm2-pvscan@8:2.service: Failed to send unit remove signal for lvm2-pvscan@8:2.service: Transport endpoint is not connected
[  108.125304] systemd-shutdown[1]: Sending SIGKILL to PID 314 (lvmetad).
--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel
