RBD-Mirror Mirror Snapshot stuck

I have an rbd-mirror snapshot on one image that failed to replicate, and it is now not getting cleaned up.

This was my own fault, based on the steps I took; I'm just trying to understand how to clean up/handle the situation.

Here is how I got into this situation. 

- Created a manual rbd snapshot on the image 
- On the remote cluster, I cloned that snapshot 
- While the clone still existed on the secondary cluster, I made the mistake of deleting the snapshot on the primary 
- The subsequent mirror snapshot failed 
- I then removed the clone 
- The next mirror snapshot was successful, but I was left with a mirror snapshot on the primary that I can't seem to get rid of 
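For reference, the steps above roughly correspond to this command sequence (a reconstruction, not a transcript: the snapshot name `manual-snap` and clone name `clone-test` are placeholders, the pool/image names are taken from the listing below):

```shell
# Primary cluster: create a manual snapshot (name is illustrative)
rbd snap create CephTestPool1/vm-100-disk-0@manual-snap

# Secondary cluster: clone from the replicated snapshot
# (on older releases the parent snapshot must be protected first;
#  clone v2 on recent releases removes that requirement)
rbd clone CephTestPool1/vm-100-disk-0@manual-snap CephTestPool1/clone-test

# Primary cluster: deleting the parent snapshot while the remote
# clone still existed -- this is the step that broke replication
rbd snap rm CephTestPool1/vm-100-disk-0@manual-snap

# Secondary cluster: remove the clone; mirroring then resumed
rbd rm CephTestPool1/clone-test
```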

root@Ccscephtest1:/var/log/ceph# rbd snap ls --all CephTestPool1/vm-100-disk-0 
SNAPID NAME SIZE PROTECTED TIMESTAMP NAMESPACE 
10082 .mirror.primary.90c53c21-6951-4218-9f07-9e983d490993.e0c63479-b09e-4c66-a65b-085b67a19907 2 TiB Thu Jan 21 07:10:09 2021 mirror (primary peer_uuids:[]) 
10243 .mirror.primary.90c53c21-6951-4218-9f07-9e983d490993.483e55aa-2f64-4bb0-ac0f-7b5aac59830e 2 TiB Thu Jan 21 07:30:08 2021 mirror (primary peer_uuids:[debf975b-ebb8-432c-a94a-d3b101e0f770]) 

I have tried deleting the snap with "rbd snap rm" as I would for a normal user-created snap, but no luck. Is there any way to force the deletion?
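Concretely, this is what I tried on the primary (the snapshot name is the stale one from the listing above; the `--snap-id` form is something I believe newer rbd releases accept, but I'm not certain it applies to mirror-namespace snapshots):

```shell
# Removing by name, as for an ordinary snapshot -- no luck
rbd snap rm CephTestPool1/vm-100-disk-0@.mirror.primary.90c53c21-6951-4218-9f07-9e983d490993.e0c63479-b09e-4c66-a65b-085b67a19907

# Removing by snap id (option available on recent releases; untested guess
# on my part that it works for snapshots in the mirror namespace)
rbd snap rm --snap-id 10082 CephTestPool1/vm-100-disk-0
```

If neither works, would disabling and re-enabling snapshot-based mirroring on the image (`rbd mirror image disable` / `rbd mirror image enable ... snapshot`) be a safe way to recreate the mirror snapshot state, or does that force a full resync?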

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


