I have an md RAID 5 array that I tried to stop, but the stop failed. I pulled the drives off the SATA bus and re-initialized them, and then realized the initial failure to stop was most likely caused by LVM, which I had forgotten to remove first. LVM is no longer usable on the system, and the LVM processes (vgdisplay, vgscan and lvremove) are hung so hard they will not respond to kill -9. LVM appears deadlocked on /var/lock/lvm/V_vg-raid1. I would really rather not reboot, as this is a KVM virtual host. Is there any way to fix this?

lsof | grep md127:

md127_rai  1419 root cwd DIR 253,0 4096     2 /
md127_rai  1419 root rtd DIR 253,0 4096     2 /
md127_rai  1419 root txt unknown            /proc/1419/exe
vgdisplay 26178 root 4r  BLK 9,127 0t0  12181 /dev/md127
vgscan    26297 root 4r  BLK 9,127 0t0  12181 /dev/md127
lvremove  30269 root 4r  BLK 9,127 0t0  12181 /dev/md127
vgs       30677 root 4r  BLK 9,127 0t0  12181 /dev/md127

vgreduce -vvvv --removemissing /dev/mapper/vg--raid1:

#lvmcmdline.c:1045 Processing: vgreduce -vvvv --removemissing /dev/mapper/vg--raid1
#lvmcmdline.c:1048 O_DIRECT will be used
#config/config.c:996 Setting global/locking_type to 1
#config/config.c:996 Setting global/wait_for_locks to 1
#locking/locking.c:242 File-based locking selected.
#config/config.c:973 Setting global/locking_dir to /var/lock/lvm
#libdm-common.c:462 Preparing SELinux context for /var/lock/lvm to system_u:object_r:lvm_lock_t:s0.
#libdm-common.c:465 Resetting SELinux context to default value.
#vgreduce.c:246 Finding volume group "vg-raid1"
#locking/file_locking.c:235 Locking /var/lock/lvm/V_vg-raid1 WB
#libdm-common.c:462 Preparing SELinux context for /var/lock/lvm/V_vg-raid1 to system_u:object_r:lvm_lock_t:s0.
#locking/file_locking.c:141 _do_flock /var/lock/lvm/V_vg-raid1:aux WB
#locking/file_locking.c:141 _do_flock /var/lock/lvm/V_vg-raid1 WB
^C#locking/file_locking.c:118 CTRL-c detected: giving up waiting for lock
#locking/file_locking.c:163 /var/lock/lvm/V_vg-raid1: flock failed: Interrupted system call
#locking/file_locking.c:51 _undo_flock /var/lock/lvm/V_vg-raid1:aux
#libdm-common.c:465 Resetting SELinux context to default value.
#locking/file_locking.c:249 <backtrace>
#locking/file_locking.c:290 <backtrace>
#locking/locking.c:396 <backtrace>
#locking/locking.c:465 <backtrace>
#metadata/metadata.c:3927 Can't get lock for vg-raid1
#metadata/vg.c:53 Allocated VG (null) at 0x29d5c90.
#metadata/vg.c:68 Freeing VG (null) at 0x29d5c90.
#vgreduce.c:272 Trying to open VG vg-raid1 for recovery...
#locking/file_locking.c:235 Locking /var/lock/lvm/V_vg-raid1 WB
#libdm-common.c:462 Preparing SELinux context for /var/lock/lvm/V_vg-raid1 to system_u:object_r:lvm_lock_t:s0.
#locking/file_locking.c:141 _do_flock /var/lock/lvm/V_vg-raid1:aux WB
#locking/file_locking.c:141 _do_flock /var/lock/lvm/V_vg-raid1 WB
^C#locking/file_locking.c:118 CTRL-c detected: giving up waiting for lock
#locking/file_locking.c:163 /var/lock/lvm/V_vg-raid1: flock failed: Interrupted system call
#locking/file_locking.c:51 _undo_flock /var/lock/lvm/V_vg-raid1:aux
#libdm-common.c:465 Resetting SELinux context to default value.
#locking/file_locking.c:249 <backtrace>
#locking/file_locking.c:290 <backtrace>
#locking/locking.c:396 <backtrace>
#locking/locking.c:465 <backtrace>
#metadata/metadata.c:3927 Can't get lock for vg-raid1
#metadata/vg.c:53 Allocated VG (null) at 0x29d5f80.
#vgreduce.c:280 <backtrace>
#mm/memlock.c:389 Unlock: Memlock counters: locked:0 critical:0 daemon:0 suspended:0
#activate/fs.c:486 Syncing device names
#locking/file_locking.c:290 <backtrace>
#locking/locking.c:396 <backtrace>
#cache/lvmcache.c:328 Internal error: Attempt to unlock unlocked VG vg-raid1.
#locking/locking.c:465 <backtrace>
#metadata/vg.c:68 Freeing VG (null) at 0x29d5f80.

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
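For context on what the trace is showing: with file-based locking (locking_type 1), each LVM command takes an exclusive flock() on /var/lock/lvm/V_<vgname>, so whichever process still holds that lock blocks every later command on the same VG. The blocking behaviour itself can be reproduced with util-linux flock(1); this is a minimal sketch using a throwaway path in /tmp, not LVM's real lock file:

```shell
# Sketch of LVM's file-based VG locking (locking_type 1): the first command
# flock()s /var/lock/lvm/V_<vgname> exclusively, and every later command on
# that VG blocks in flock() until the lock is released.
# /tmp/V_vg-raid1.demo is a throwaway stand-in, NOT LVM's real lock file.
lock=/tmp/V_vg-raid1.demo

exec 9>"$lock"   # open the lock file on fd 9
flock -x 9       # first "LVM command": takes the exclusive lock

# A second attempt uses a new open file description, so it conflicts with
# the lock held on fd 9; -w 1 gives up after one second instead of hanging
# (which is what the stuck vgdisplay/vgscan/lvremove never get to do).
flock -x -w 1 "$lock" -c true
rc=$?
echo "second lock attempt exit=$rc"   # 1 = lock still held elsewhere
```

On the real system, `fuser -v /var/lock/lvm/V_vg-raid1` (from psmisc) may help identify which PID still has the lock file open, though a process stuck in uninterruptible sleep on the dead /dev/md127 cannot release it until the I/O completes or errors out.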