RE: OpenStack Cinder + Ceph, unable to remove unattached volumes, still watchers

Hi Josh,

Thank you for your answer, but I was on Bobtail, so there was no listwatchers command :)

I scheduled a reboot of the affected compute nodes and everything went fine after that. I also updated Ceph to the latest stable release.
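
For the record, now that listwatchers is available here, this is roughly the check I plan to run before retrying a delete, following Josh's steps below (pool name is from my own setup, the image id is a placeholder):

$ rbd --pool cinder info volume-46e241ee-ed3f-446a-87c7-1c9df560d770 | grep prefix
# take the id from block_name_prefix (rbd_data.<id>) and inspect the header object
$ rados -p cinder listwatchers rbd_header.<id>
# no output means no watchers are left, so the delete should now go through
$ rbd --pool cinder rm volume-46e241ee-ed3f-446a-87c7-1c9df560d770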




________________________________________
From: Josh Durgin [josh.durgin@xxxxxxxxxxx]
Sent: Tuesday, August 20, 2013 22:40
To: HURTEVENT VINCENT
Cc: Maciej Gałkiewicz; ceph-users@xxxxxxxx
Subject: Re: OpenStack Cinder + Ceph, unable to remove unattached volumes, still watchers

On 08/20/2013 11:20 AM, Vincent Hurtevent wrote:
>
>
> I'm not the end user. It's possible that the volume was detached
> without being unmounted.
>
> As the volume is unattached and the initial kvm instance is down, I was
> expecting the rbd volume to be released properly even if the guest never
> unmounted it, just like a physical disk.

Yes, detaching the volume will remove the watch regardless of the guest
having it mounted.

> Which part of Ceph still considers it locked or marked in use? Do we
> have to go down to the rados object level?
> The data can be destroyed.

It's a watch on the rbd header object, registered when the rbd volume
is attached, and unregistered when it is detached or 30 seconds after
the qemu/kvm process using it dies.

From rbd info you can get the id of the image (part of the
block_name_prefix), and use the rados tool to see which IP is watching
the volume's header object, i.e.:

$ rbd info volume-name | grep prefix
         block_name_prefix: rbd_data.102f74b0dc51
$ rados -p rbd listwatchers rbd_header.102f74b0dc51
watcher=192.168.106.222:0/1029129 client.4152 cookie=1
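
The watcher address tells you which compute node still has the image open. On that node, one rough way to find the leftover qemu/kvm process (exact commands depend on your distro and libvirt setup) is to grep for the volume in the process list or check the domains' disks:

$ ps aux | grep qemu | grep volume-46e241ee-ed3f-446a-87c7-1c9df560d770
$ virsh list
$ virsh domblklist <domain>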

> Could rebooting the compute nodes clean up the librbd layer and clear the watchers?

Yes, because this would kill all the qemu/kvm processes.
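
If you can identify the stale qemu/kvm process instead, killing just that process should be enough; the watch times out about 30 seconds after the process dies, and the delete should then succeed. A rough sketch (the pid comes from the ps/virsh check above):

$ kill <pid-of-stale-qemu-process>
# wait ~30s for the watch to time out, then retry the delete
$ rbd --pool cinder rm volume-46e241ee-ed3f-446a-87c7-1c9df560d770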

Josh

> ________________________________________
> From: Don Talton (dotalton) [dotalton@xxxxxxxxx]
> Sent: Tuesday, August 20, 2013 19:57
> To: HURTEVENT VINCENT
> Subject: RE: OpenStack Cinder + Ceph, unable to remove
> unattached volumes, still watchers
>
> Did you unmount them in the guest before detaching?
>
>  > -----Original Message-----
>  > From: ceph-users-bounces@xxxxxxxxxxxxxx [mailto:ceph-users-
>  > bounces@xxxxxxxxxxxxxx] On Behalf Of Vincent Hurtevent
>  > Sent: Tuesday, August 20, 2013 10:33 AM
>  > To: ceph-users@xxxxxxxx
>  > Subject:  OpenStack Cinder + Ceph, unable to remove
>  > unattached volumes, still watchers
>  >
>  > Hello,
>  >
 >  > I'm using Ceph as a Cinder backend. It's working pretty well and some
 >  > users have been using this cloud platform for a few weeks, but I came
 >  > back from vacation to errors when removing volumes, errors I didn't
 >  > have a few weeks ago.
>  >
 >  > Here's the situation:
 >  >
 >  > Volumes are unattached, but Ceph tells Cinder (and me, when I try to
 >  > remove them through the rbd tools) that the volume still has watchers.
>  >
>  > rbd --pool cinder rm volume-46e241ee-ed3f-446a-87c7-1c9df560d770
>  > Removing image: 99% complete...failed.
>  > rbd: error: image still has watchers
 >  > This means the image is still open or the client using it crashed.
 >  > Try again after closing/unmapping it or waiting 30s for the crashed
 >  > client to timeout.
>  > 2013-08-20 19:17:36.075524 7fedbc7e1780 -1 librbd: error removing
>  > header: (16) Device or resource busy
>  >
>  >
 >  > The kvm instances to which the volumes were attached are now
 >  > terminated. There's no lock on the volumes according to 'rbd lock list'.
>  >
 >  > I restarted all three monitors, one by one, without success.
>  >
 >  > From OpenStack's point of view, these volumes are indeed unattached.
>  >
 >  > How can I unlock the volumes or trace the watcher back to a process?
 >  > These could be on several different compute nodes.
>  >
>  >
>  > Thank you for any hint,
>  >
>  >
>

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




