Accumulation of removed_snaps_queue After Deleting Snapshots in Ceph RBD

Hello,

I'm running into an issue with Ceph used as the backend storage for OpenStack Cinder. After deleting RBD snapshots through Cinder, I've noticed a significant increase in the number of removed_snaps_queue entries in the corresponding Ceph pool, which seems to hurt the pool's performance and space efficiency.
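
For reference, this is roughly how I've been counting the queued intervals per pool. It's only a minimal sketch: it assumes a recent release (Octopus or later) where "ceph osd pool ls detail" reports the queue, and it simply scrapes that plain-text output, so the parsing may need adjusting for other versions:

#!/usr/bin/env python3
# Sketch: count removed_snaps_queue intervals per pool by scraping the
# plain-text output of "ceph osd pool ls detail". Assumes Octopus or
# later, where that command reports the queue; adjust the parsing if
# your output format differs.
import subprocess

def main():
    out = subprocess.run(
        ["ceph", "osd", "pool", "ls", "detail"],
        capture_output=True, text=True, check=True,
    ).stdout

    pool = None
    for line in out.splitlines():
        stripped = line.strip()
        if stripped.startswith("pool "):
            # Pool lines look like: pool 2 'volumes' replicated size 3 ...
            pool = stripped.split("'")[1]
        if "removed_snaps_queue" in stripped and pool:
            intervals = stripped.split("removed_snaps_queue", 1)[1].strip(" []")
            count = len([i for i in intervals.split(",") if i.strip()])
            print(f"{pool}: {count} queued interval(s)")

if __name__ == "__main__":
    main()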

I understand that snapshot deletion in Cinder is an asynchronous operation, and Ceph itself uses a lazy deletion mechanism to handle snapshot removal. However, even after allowing sufficient time, the entries in removed_snaps_queue do not decrease as expected.
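
In case it helps, here is a minimal sketch of how I've been watching the trimming side of that lazy deletion, assuming "ceph pg ls" accepts the snaptrim / snaptrim_wait state filters:

#!/usr/bin/env python3
# Sketch: count PGs currently trimming removed snapshots, assuming
# "ceph pg ls <state>" accepts the snaptrim and snaptrim_wait filters.
import subprocess

def pgs_in_state(state):
    out = subprocess.run(
        ["ceph", "pg", "ls", state],
        capture_output=True, text=True, check=True,
    ).stdout
    # Plain output is a header line followed by one line per PG; PG ids
    # like "2.1f" start with a digit, so count those lines.
    return [l for l in out.splitlines() if l and l[0].isdigit()]

if __name__ == "__main__":
    for state in ("snaptrim", "snaptrim_wait"):
        print(f"{state}: {len(pgs_in_state(state))} PG(s)")

If both counts stay at zero while removed_snaps_queue keeps growing, it looks to me like trimming is stalled rather than merely slow.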

I have several questions for the community:

1. Are there recommended methods or best practices for managing or reducing the entries in removed_snaps_queue?
2. Is there a tool or command that can safely clear these residual snapshot entries without affecting the integrity of active snapshots and data?
3. Is this a known issue, and are there any existing bug reports or planned fixes for it?
Thank you very much for your assistance!
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


