Re: RGW orphans search

Hi Andrei,

The orphans find code is not working; it will be deprecated in an upcoming release, maybe 14.2.10.

Check:  https://docs.ceph.com/docs/master/radosgw/orphans/
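
That page describes the newer rgw-orphan-list tool that replaces orphans find (note it may only ship with releases newer than the 13.2.10 you are running). A minimal sketch of how it would be run against your data pool, purely as an illustration:

    # list suspected orphan RADOS objects in the RGW data pool;
    # the tool writes its results to timestamped files in the current directory
    rgw-orphan-list .rgw.buckets

It only produces a list of suspected orphans; actually deleting anything from the pool is still a manual step that you should double-check first.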

Stopping a job that is in progress is also buggy.
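
For reference, the documented way to stop a job and clean up its intermediate data is orphans finish, although as mentioned it does not work reliably. A sketch using your job id:

    # show which orphan scan jobs exist
    radosgw-admin orphans list-jobs
    # supposed to remove the job's intermediate scan data (buggy in practice)
    radosgw-admin orphans finish --job-id=ophans_clean1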

You have the same issue as us: multipart uploads are not being cleaned up due to sharding bugs.

Our fast solution to recover 100 TB: s3cmd sync the data to another bucket and then delete the old bucket.

Not transparent at all, but it works.
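
A rough sketch of that workaround with s3cmd (the bucket names are placeholders; verify the copy is complete before deleting anything):

    # create the new bucket and copy everything across
    s3cmd mb s3://newbucket
    s3cmd sync s3://oldbucket s3://newbucket
    # once the copy is verified, remove the old objects and the old bucket
    s3cmd del --recursive --force s3://oldbucket
    s3cmd rb s3://oldbucket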

Another recommendation: disable dynamic resharding and set a fixed shard number in your config.
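
A minimal sketch of what that could look like in ceph.conf (the section name and the shard count of 64 are just examples; size the shard count for your largest buckets):

    [client.rgw]
    # stop RGW from resharding bucket indexes automatically
    rgw dynamic resharding = false
    # create new buckets with a fixed number of index shards (example value)
    rgw override bucket index max shards = 64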

Regards
Manuel
 

-----Original Message-----
From: Andrei Mikhailovsky <andrei@xxxxxxxxxx>
Sent: Saturday, 30 May 2020 13:12
To: ceph-users <ceph-users@xxxxxxx>
Subject: RGW orphans search

Hello, 

I am trying to clean up some wasted space (about 1/3 of the used space in the rados pool is currently unaccounted for, even taking the replication level into account). I started the search command 20 days ago ( radosgw-admin orphans find --pool=.rgw.buckets --job-id=ophans_clean1 --yes-i-really-mean-it ) and it is still showing me the same thing:

[
    {
        "orphan_search_state": {
            "info": {
                "orphan_search_info": {
                    "job_name": "ophans_clean1",
                    "pool": ".rgw.buckets",
                    "num_shards": 64,
                    "start_time": "2020-05-10 21:39:28.913405Z"
                }
            },
            "stage": {
                "orphan_search_stage": {
                    "search_stage": "iterate_bucket_index",
                    "shard": 0,
                    "marker": ""
                }
            }
        }
    }
]


The output of the command keeps showing this (hundreds of thousands of lines): 

storing 1 entries at orphan.scan.ophans_clean1.linked.60 
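
To get an idea of how much intermediate state the scan has accumulated, the per-shard objects referenced in those log lines should be listable with rados; the pool name below is an assumption (the RGW log pool, .log on an old-style deployment like this one):

    # list the intermediate objects written by the orphan scan (pool name assumed)
    rados -p .log ls | grep orphan.scan.ophans_clean1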

The total size of the pool is around 30TB and the bucket usage is just under 10TB. The replication level is 2. Activity on the cluster has spiked since I started the command (currently seeing 10-20K IOPS compared to a typical 2-5K IOPS).

Has anyone experienced this behaviour? It seems like the command should have finished by now with only 30TB of used space. I am running Ceph version 13.2.10-1xenial.

Cheers 

Andrei
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



