Leaked clone objects

Hello, 
  
Over the last few weeks, we have observed an abnormal increase in a pool's data usage (by a factor of 2). It turned out that we were hit by this bug [1]. 
  
In short, if you have taken pool snapshots and removed them with the following command 
  
'ceph osd pool rmsnap {pool-name} {snap-name}' 
  
instead of using this command 
  
'rados -p {pool-name} rmsnap {snap-name}' 
  
then you may have leaked clone objects (not trimmed) in your cluster, occupying space that you can't reclaim. 
  
You may have such leaked objects if (not exclusively; see the example below): 
  
- 'rados df' reports CLONES for pools with no snapshots 
- 'rgw-orphan-list' (for RGW pools) reports objects that you can't 'stat' but for which 'listsnaps' shows a cloneid. 
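  
As a rough sketch (pool and object names below are placeholders, adjust them to your cluster), the checks could look like this: 
  
    # a non-zero CLONES count on a pool that has no snapshots is suspicious 
    rados df 
  
    # for a suspect object: 'stat' may fail with 'No such file or directory', 
    # while 'listsnaps' still reports a cloneid for the leaked clone 
    rados -p {pool-name} stat {object-name} 
    rados -p {pool-name} listsnaps {object-name} 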
  
'ceph osd pool force-remove-snap {pool-name}' should make the OSDs re-trim these leaked clone objects once [2] makes it into Quincy, Reef, Squid (and hopefully Pacific). 
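  
For instance (a sketch only, with a placeholder pool name), once the fix is available you could trigger the re-trim and then keep an eye on 'rados df' to confirm the pool's CLONES count goes down: 
  
    ceph osd pool force-remove-snap {pool-name} 
    rados df   # CLONES for the pool should decrease once trimming completes 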
  
Hope this helps, 
  
Regards, 
Frédéric. 
  
[1] https://tracker.ceph.com/issues/64646 
[2] https://github.com/ceph/ceph/pull/53545  