Octopus missing rgw-orphan-list tool


 



Hello, 

I have been struggling a lot with radosgw bucket space wastage, which currently stands at about 2/3 of utilised space being wasted and unaccounted for. I've tried to use the tools to find the orphan objects, but these were running in a loop for weeks without producing any results. Wido and a few others pointed out that this function is broken and was deprecated, and that rgw-orphan-list should be used instead.

I have upgraded to Octopus and I have been following the documentation at https://docs.ceph.com/docs/master/radosgw/orphans/ . However, the ceph and radosgw packages for Ubuntu 18.04 do not seem to include this tool. The same applies to the "bucket radoslist" option of the radosgw-admin command.
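For reference, this is a sketch of what I expected to be able to run per that documentation (the pool name "default.rgw.buckets.data" and the bucket name are placeholders for illustration, not my actual values):

```shell
# Documented orphan-scan tool, run against the RGW data pool
# (pool name here is an assumed default, adjust to your deployment):
rgw-orphan-list default.rgw.buckets.data

# The per-bucket RADOS object listing the orphan tooling relies on:
radosgw-admin bucket radoslist --bucket=<bucket-name>
```

Neither command is recognised on my 15.2.3 install, as shown below.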

root@arh-ibstorage1-ib:~# radosgw-admin bucket radoslist 
ERROR: Unrecognized argument: 'radoslist' 
Expected one of the following: 
check 
chown 
limit 
link 
list 
reshard 
rewrite 
rm 
stats 
sync 
unlink 

root@arh-ibstorage1-ib:~# dpkg -l *rados\* 
Desired=Unknown/Install/Remove/Purge/Hold 
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend 
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad) 
||/ Name Version Architecture Description 
+++-=======================-================-================-==================================================== 
un librados <none> <none> (no description available) 
ii librados2 15.2.3-1bionic amd64 RADOS distributed object store client library 
ii libradosstriper1 15.2.3-1bionic amd64 RADOS striping interface 
ii python3-rados 15.2.3-1bionic amd64 Python 3 libraries for the Ceph librados library 
ii radosgw 15.2.3-1bionic amd64 REST gateway for RADOS distributed object store 


I am running Ubuntu 18.04 with version 15.2.3 of ceph and radosgw. 

Please suggest what I should do to reclaim the wasted space that radosgw is creating. I've calculated the wasted space by adding up the reported usage of all the buckets and checking it against the output of the rados df command. The buckets are using around 11TB. rados df reports 68TB of usage with a replication factor of 2. Rather alarming!
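Spelled out, the back-of-the-envelope arithmetic behind that estimate (the per-bucket total comes from summing the usage reported by "radosgw-admin bucket stats"):

```shell
# Space accounting from the figures above, in TB.
bucket_tb=11                                  # sum of reported bucket usage
replicas=2                                    # pool replication factor
expected_raw=$((bucket_tb * replicas))        # raw usage I would expect: 22 TB
reported_raw=68                               # raw usage reported by "rados df"
wasted=$((reported_raw - expected_raw))       # unaccounted-for space: 46 TB
echo "$wasted TB unaccounted for"
```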

Thanks for your help


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


