Re: ceph can list volumes from a pool but can not remove the volume

https://docs.ceph.com/en/reef/rbd/rbd-snapshot/ should give you everything you need.

Sounds like you may have snapshots with linked clones that have left the parent image lingering as a tombstone.

Start with

	rbd children volume-ssd/volume-8a30615b-1c91-4e44-8482-3c7d15026c28
	rbd info volume-ssd/volume-8a30615b-1c91-4e44-8482-3c7d15026c28
	rbd du volume-ssd/volume-8a30615b-1c91-4e44-8482-3c7d15026c28
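
If those show protected snapshots and linked clones, the usual sequence (a sketch; the clone image and snapshot names below are placeholders for whatever the commands above report) is to flatten the clones, unprotect and purge the snapshots, then remove the image:

	rbd snap ls volume-ssd/volume-8a30615b-1c91-4e44-8482-3c7d15026c28
	rbd flatten volume-ssd/<clone-image>
	rbd snap unprotect volume-ssd/volume-8a30615b-1c91-4e44-8482-3c7d15026c28@<snap-name>
	rbd snap purge volume-ssd/volume-8a30615b-1c91-4e44-8482-3c7d15026c28
	rbd rm volume-ssd/volume-8a30615b-1c91-4e44-8482-3c7d15026c28

Flattening copies the shared data into each clone, so it can take a while on large images.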

It also looks like that's the only volume in the pool. If targeted cleanup doesn't work, you could delete the whole pool instead, but triple-check everything before taking action here.
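
For reference, pool removal is deliberately guarded (a sketch; mon_allow_pool_delete must be enabled on the monitors first, and the pool name is repeated as a safety check):

	ceph config set mon mon_allow_pool_delete true
	ceph osd pool rm volume-ssd volume-ssd --yes-i-really-really-mean-it
	ceph config set mon mon_allow_pool_delete false

Turning the guard back off afterwards avoids accidental pool deletions later.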


> On Sep 25, 2024, at 1:50 PM, bryansoong21@xxxxxxxxx wrote:
> 
> We have a volume in our cluster:
> 
> [root@xxxxxxxxxx-a ~]# rbd ls volume-ssd
> volume-8a30615b-1c91-4e44-8482-3c7d15026c28
> 
> [root@xxxxxxxxxx-a ~]# rbd rm volume-ssd/volume-8a30615b-1c91-4e44-8482-3c7d15026c28
> Removing image: 0% complete...failed.
> rbd: error opening image volume-8a30615b-1c91-4e44-8482-3c7d15026c28: (2) No such file or directory
> rbd: image has snapshots with linked clones - these must be deleted or flattened before the image can be removed.
> 
> Any ideas on how can I remove the volume? Thanks
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
