Re: OpenStack Cinder + Ceph, unable to remove unattached volumes, still watchers

I'm not the end user. It's possible that the volume was detached without being unmounted first.

As the volume is unattached and the original KVM instance is down, I was expecting the RBD volume to be released cleanly even if the guest never unmounted it, just as a physical disk would be.

Which part of Ceph still considers the volume locked or marked in use? Do we have to go down to the RADOS object level?
The data can be destroyed.
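
If it comes to that, here is roughly what I would try at the RADOS level to trace a watcher back to a host. This is only a sketch: it assumes a format 2 image, whose header object is named rbd_header.<image id> (for a format 1 image the header object would be <image name>.rbd instead).

# find the image id: it is the suffix of block_name_prefix in the output
rbd info --pool cinder volume-46e241ee-ed3f-446a-87c7-1c9df560d770

# list the clients watching the header object; the output shows the
# client id and the IP address of the node holding the watch
rados -p cinder listwatchers rbd_header.<image id>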

Would rebooting the compute nodes clear the librbd layer and remove the stale watchers?
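
If the watcher really is a leftover librbd client on a compute node, maybe evicting it would be enough instead of a full reboot; purely a guess on my side, using the client address reported by listwatchers above:

# evict the stale client holding the watch (address as shown by listwatchers)
ceph osd blacklist add <client address>

# then retry the delete
rbd --pool cinder rm volume-46e241ee-ed3f-446a-87c7-1c9df560d770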




________________________________________
From: Don Talton (dotalton) [dotalton@xxxxxxxxx]
Sent: Tuesday, August 20, 2013 19:57
To: HURTEVENT VINCENT
Subject: RE: OpenStack Cinder + Ceph, unable to remove unattached volumes, still watchers

Did you unmount them in the guest before detaching?

> -----Original Message-----
> From: ceph-users-bounces@xxxxxxxxxxxxxx [mailto:ceph-users-
> bounces@xxxxxxxxxxxxxx] On Behalf Of Vincent Hurtevent
> Sent: Tuesday, August 20, 2013 10:33 AM
> To: ceph-users@xxxxxxxx
> Subject:  OpenStack Cinder + Ceph, unable to remove
> unattached volumes, still watchers
>
> Hello,
>
> I'm using Ceph as a Cinder backend. It's currently working pretty well, and
> some users have been using this cloud platform for a few weeks, but I've just
> come back from vacation and I'm now getting errors when removing volumes,
> errors I didn't have a few weeks ago.
>
> Here's the situation:
>
> Volumes are unattached, but Ceph tells Cinder (or me, when I try to remove
> them through the rbd tool) that the volume still has watchers.
>
> rbd --pool cinder rm volume-46e241ee-ed3f-446a-87c7-1c9df560d770
> Removing image: 99% complete...failed.
> rbd: error: image still has watchers
> This means the image is still open or the client using it crashed. Try again after
> closing/unmapping it or waiting 30s for the crashed client to timeout.
> 2013-08-20 19:17:36.075524 7fedbc7e1780 -1 librbd: error removing
> header: (16) Device or resource busy
>
>
> The KVM instances to which the volumes were attached are now terminated.
> There is no lock on the volumes according to 'rbd lock list'.
>
> I restarted all three monitors, one by one, with no success.
>
> From OpenStack's point of view, these volumes are indeed unattached.
>
> How can I unlock the volumes or trace back the watching client/process? It
> could be on any of several different compute nodes.
>
>
> Thank you for any hints,
>
>

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




