luminous - 12.2.1 - stale RBD locks after client crash

Hello ceph users and developers,

I've stumbled upon a somewhat strange problem with Luminous.

One of our servers running multiple QEMU clients crashed. When we
tried restarting those on another cluster node, we got lots of fsck
errors; the disks seemed to return "physical" block errors. I traced
this back to stale RBD locks on the volumes held by the crashed
machine. Once I removed the locks, everything started to work.
(For some volumes I was fixing this the day after the crash, so more
than 10-15 hours later.)
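
In case it helps anyone hitting the same thing, the cleanup boils
down to listing the locks and removing the stale ones. The pool/image
names and IDs below are just placeholders, not our actual values:

  # show current lockers of an image (prints locker, lock ID, address)
  rbd lock list libvirt-pool/vm-disk-1

  # remove the stale lock, using the lock ID and locker from the listing
  rbd lock remove libvirt-pool/vm-disk-1 "auto 139643345791728" client.4123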

My question is: is this a bug or a feature? I mean, after a client
crashes, should the locks somehow expire, or do they need to be
removed by hand? I don't remember having this issue with older Ceph
versions, but I suppose we didn't have the exclusive-lock feature
enabled.
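
(To check whether that's actually the case, the enabled features can
be listed with something like the following; the image spec is again
just an example:

  rbd info libvirt-pool/vm-disk-1 | grep features

exclusive-lock should appear in the features line if it's enabled.)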

I'll be very grateful for any reply.

with best regards

nik
-- 
-------------------------------------
Ing. Nikola CIPRICH
LinuxBox.cz, s.r.o.
28.rijna 168, 709 00 Ostrava

tel.:   +420 591 166 214
fax:    +420 596 621 273
mobil:  +420 777 093 799
www.linuxbox.cz

mobil servis: +420 737 238 656
email servis: servis@xxxxxxxxxxx
-------------------------------------
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


