RGW orphans search

Hello, 

I am trying to clean up some wasted space: about one third of the used space in the rados pool is currently unaccounted for, even after allowing for the replication level. I started the search command 20 days ago ( radosgw-admin orphans find --pool=.rgw.buckets --job-id=ophans_clean1 --yes-i-really-mean-it ) and it is still showing me the same thing: 

[
    {
        "orphan_search_state": {
            "info": {
                "orphan_search_info": {
                    "job_name": "ophans_clean1",
                    "pool": ".rgw.buckets",
                    "num_shards": 64,
                    "start_time": "2020-05-10 21:39:28.913405Z"
                }
            },
            "stage": {
                "orphan_search_stage": {
                    "search_stage": "iterate_bucket_index",
                    "shard": 0,
                    "marker": ""
                }
            }
        }
    }
]


The command's output keeps repeating lines like this (hundreds of thousands of them so far): 

storing 1 entries at orphan.scan.ophans_clean1.linked.60 

The total size of the pool is around 30TB and the bucket usage is just under 10TB, with a replica count of 2. Activity on the cluster has spiked since I started the command (currently 10-20K IOPS compared to the typical 2-5K IOPS). 
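For reference, here is the arithmetic behind the "about 1/3 unaccounted for" figure mentioned above (a quick sanity check using the pool size, bucket usage, and replica count stated in this message):

```python
# Sanity check of the "about 1/3 unaccounted for" figure.
# All numbers are from the cluster described above: ~30 TB raw used
# in .rgw.buckets, ~10 TB of bucket data, replica count of 2.
pool_used_tb = 30        # raw space consumed by the pool
bucket_data_tb = 10      # logical data reported for the buckets
replicas = 2

expected_raw_tb = bucket_data_tb * replicas      # 20 TB accounted for
unaccounted_tb = pool_used_tb - expected_raw_tb  # 10 TB unexplained

fraction = unaccounted_tb / pool_used_tb
print(f"unaccounted: {unaccounted_tb} TB ({fraction:.0%} of used space)")
# → unaccounted: 10 TB (33% of used space)
```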

Has anyone experienced this behaviour? It seems like the command should have finished by now with only 30TB of used space. I am running ceph version 13.2.10-1xenial. 

Cheers 

Andrei 
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


