ceph rbd volume can't be removed because image still has watchers


 



Make sure the volume "volume-17d9397b-d6e5-45e0-80fa-4bc7b7998842" does not have a snapshot or clone linked to it; that can sometimes cause problems during deletion.
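For reference, a minimal sketch of those checks (pool and image name taken from the thread below; the commands are only echoed here, since they would need a live cluster to run):

```shell
#!/bin/sh
# Sketch of the pre-delete checks suggested above. Pool and image name are
# copied from the thread; adjust for your own cluster.
pool=glance
img=volume-17d9397b-d6e5-45e0-80fa-4bc7b7998842

# Echoed rather than executed, since they require a running Ceph cluster:
echo "rbd snap ls -p $pool $img"      # list any snapshots on the image
echo "rbd snap purge $pool/$img"      # remove unprotected snapshots, if any
echo "rbd rm $pool/$img"              # then retry the delete
```

If a snapshot is protected (because a clone depends on it), it has to be unprotected or the clone flattened before the purge will succeed.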


- Karan Singh

On 07 Aug 2014, at 08:55, ??? <yangwanyuan8861 at gmail.com> wrote:

> Hi all,
>     We use Ceph RBD with OpenStack. Recently some dirty data appeared in my cinder-volume database, such as volumes stuck in the error-deleting status, so we need to delete these volumes manually.
>     But when I delete the volume on the Ceph node, Ceph gives me this error:
> 
>       [root@ceph-node3 ~]# rbd -p glance rm volume-17d9397b-d6e5-45e0-80fa-4bc7b7998842
>         Removing image: 99% complete...failed.
>         rbd: error: image still has watchers
>         This means the image is still open or the client using it crashed. Try again after closing/unmapping it or waiting 30s for the crashed client to timeout.
>         2014-08-07 11:25:42.793275 7faf8c58b760 -1 librbd: error removing header: (16) Device or resource busy
> 
> 
>    I googled this problem and found this: http://comments.gmane.org/gmane.comp.file-systems.ceph.user/9767
>    I followed it and got this:
>     
>      [root@ceph-node3 ~]# rbd info -p glance volume-17d9397b-d6e5-45e0-80fa-4bc7b7998842
>         rbd image 'volume-17d9397b-d6e5-45e0-80fa-4bc7b7998842':
>         size 51200 MB in 12800 objects
>         order 22 (4096 kB objects)
>         block_name_prefix: rbd_data.3b1464f8e96d5
>         format: 2
>         features: layering
>      [root@ceph-node3 ~]# rados -p glance listwatchers rbd_header.3b1464f8e96d5
>         watcher=192.168.39.116:0/1032797 client.252302 cookie=1
> 
>   192.168.39.116 is my nova-compute node, so I can't reboot this server.
>   What can I do to delete this volume without rebooting my compute node?
> 
>   My ceph version is 0.72.1.
> 
>  Thanks very much!
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
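One workaround (an assumption on my part, not something confirmed in this thread) is to blacklist the watcher's address so its watch lapses, then retry the removal. A sketch that extracts the address from the listwatchers output above:

```shell
#!/bin/sh
# Hedged sketch: blacklisting the watcher is one known workaround, but note it
# disrupts *all* RBD I/O from that client address, so use it with care on a
# live nova-compute node. The watcher line is copied from the output above.
watcher_line='watcher=192.168.39.116:0/1032797 client.252302 cookie=1'

# Extract the ip:port/nonce form that `ceph osd blacklist add` expects.
addr=$(printf '%s\n' "$watcher_line" | sed 's/^watcher=//; s/ .*//')
echo "$addr"   # 192.168.39.116:0/1032797

# The cluster commands themselves (echoed here, since they need a live cluster):
echo "ceph osd blacklist add $addr"
echo "rbd -p glance rm volume-17d9397b-d6e5-45e0-80fa-4bc7b7998842"
echo "ceph osd blacklist rm $addr"
```

A gentler alternative, if the watch is just a stale mapping, is to log in to 192.168.39.116 and release it cleanly: `rbd showmapped` to find the device, then `rbd unmap <device>` once no guest is using it.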


