OpenStack Cinder + Ceph, unable to remove unattached volumes, image still has watchers

Hello,

I'm using Ceph as a Cinder backend. It has been working pretty well, and some users have been on this cloud platform for a few weeks, but I came back from vacation to find errors when removing volumes, errors I didn't have a few weeks ago.

Here's the situation:

The volumes are unattached, but Ceph tells Cinder (or me, when I try to remove them through the rbd tools) that the volume still has watchers.

rbd --pool cinder rm volume-46e241ee-ed3f-446a-87c7-1c9df560d770
Removing image: 99% complete...failed.
rbd: error: image still has watchers
This means the image is still open or the client using it crashed. Try again after closing/unmapping it or waiting 30s for the crashed client to timeout.
2013-08-20 19:17:36.075524 7fedbc7e1780 -1 librbd: error removing header: (16) Device or resource busy


The KVM instances to which the volumes had been attached are now terminated, and 'rbd lock list' shows no lock on the volumes.
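
For reference, this is the lock check I ran (the image is the same one from the error above; the output comes back empty):

rbd --pool cinder lock list volume-46e241ee-ed3f-446a-87c7-1c9df560d770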

I restarted all three monitors one by one, with no success.

From OpenStack's point of view, these volumes are properly detached.
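
For completeness, this is how I checked on the OpenStack side (the volume ID is the suffix of the rbd image name above; status comes back 'available' with no attachments):

cinder show 46e241ee-ed3f-446a-87c7-1c9df560d770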

How can I release the volumes, or trace the watcher back to its process? The watchers could be on any of several different compute nodes.
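
In case it helps, here's what I'm planning to try next. This is only a sketch on my part: I'm assuming these are format 1 images, so the header object is named <image-name>.rbd, and that my rados build supports the listwatchers command.

# List the clients watching the image's header object. The watcher=
# lines should include a client address (IP:port/nonce) that I can
# match to a compute node and, from there, to a qemu-kvm process.
rados --pool cinder listwatchers volume-46e241ee-ed3f-446a-87c7-1c9df560d770.rbd

And if the watching client really has crashed, would blacklisting its address with 'ceph osd blacklist add <addr>' be the correct way to force the watch to drop?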


Thank you for any hints,




