[lvmlockd] recovery lvmlockd after kill_vg

Hi,

  AFAIK, once sanlock can no longer access the lease storage, it sends
"kill_vg" to lvmlockd, and the standard procedure is then to deactivate
the logical volumes and drop the VG locks.
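
  The standard handling described above can be sketched as shell
commands. This is my understanding of the procedure, not a vetted
recipe; the VG name is the one from the log below, and the exact
deactivate/drop steps should be checked against the lvmlockd(8) man
page:

```shell
# VG name taken from the log output below in this mail.
VG=71b1110c97bd48aaa25366e2dc11f65f

# Deactivate all LVs in the affected VG so nothing depends on the lost leases.
vgchange -an "$VG"

# Drop the VG's locks in lvmlockd (--drop is what I believe the man page
# recommends after a kill).
lvmlockctl --drop "$VG"
```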

  But sometimes the storage recovers after kill_vg (and before we
deactivate the volumes or drop the locks), and LVM commands then print
"storage failed for sanlock leases", like this:

[root@dev1-2 ~]# vgck 71b1110c97bd48aaa25366e2dc11f65f
  WARNING: Not using lvmetad because config setting use_lvmetad=0.
  WARNING: To avoid corruption, rescan devices to make changes visible
(pvscan --cache).
  VG 71b1110c97bd48aaa25366e2dc11f65f lock skipped: storage failed for
sanlock leases
  Reading VG 71b1110c97bd48aaa25366e2dc11f65f without a lock.

  So what should I do to recover from this, preferably without
affecting volumes that are in use?

  I found a workaround, but it seems very tricky: save the
"lvmlockctl -i" output, run "lvmlockctl -r <vgname>", and then
re-activate the volumes listed in the saved output.
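
  The workaround above, spelled out as a sketch (the LV name in the
last step is a placeholder for whatever the saved output lists, and the
shared-activation mode is my assumption):

```shell
# VG name taken from the log output above.
VG=71b1110c97bd48aaa25366e2dc11f65f

# 1. Save the current lock state so we know which LVs were active.
lvmlockctl -i > /tmp/lvmlockd.state

# 2. Drop the failed VG locks in lvmlockd (-r is short for --drop).
lvmlockctl -r "$VG"

# 3. Restart the lockspace now that the storage is back.
vgchange --lock-start "$VG"

# 4. Re-activate each LV recorded in step 1 (shared mode shown;
#    "lvname" is a placeholder, repeat per LV from the saved output).
lvchange -asy "$VG/lvname"
```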

  Do we have an "official" way to handle this? It is fairly common
that by the time I notice lvmlockd has failed, the storage has already
recovered.

Thanks,
Damon Wang

_______________________________________________
linux-lvm mailing list
linux-lvm@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
