Re: Loop in radosgw-admin orphan find

Hello,

We have the same loop in our jobs on 2 clusters. The only difference is that our clusters don't use erasure coding. Same cluster version, 10.2.2. Any ideas what could be wrong?
Maybe we need to upgrade? :)

BR,

On Thu, Oct 13, 2016 at 6:15 PM, Yoann Moulin <yoann.moulin@xxxxxxx> wrote:
Hello,

I run a cluster on jewel 10.2.2. I have deleted the last bucket of a radosGW pool so that I can delete this pool and recreate it as EC (it was replicated).

Details of the pool:

> pool 36 'erasure.rgw.buckets.data' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 31459 flags hashpspool stripe_width 0

> POOLS:
>    NAME                          ID     USED       %USED     MAX AVAIL     OBJECTS
>    erasure.rgw.buckets.data      36     11838M         0        75013G         4735
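
For reference, this is roughly how I plan to recreate the pool as EC afterwards (only a sketch, the profile name and k/m values below are placeholders, not necessarily what I will use):

> # create an EC profile (k/m values here are just placeholders)
> ceph osd erasure-code-profile set rgw-ec-profile k=4 m=2
> # recreate the data pool as erasure-coded, using that profile
> ceph osd pool create erasure.rgw.buckets.data 128 128 erasure rgw-ec-profile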

After the GC ran, I found that lots of orphan objects still remain in the pool:

> $ rados ls -p erasure.rgw.buckets.data  | egrep -c "(multipart|shadow)"
> 4735
> $ rados ls -p erasure.rgw.buckets.data  | grep -c multipart
> 2368
> $ rados ls -p erasure.rgw.buckets.data  | grep -c shadow
> 2367
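
The same counts can also be obtained in a single pass over the listing, for example:

> rados ls -p erasure.rgw.buckets.data | awk '/__multipart_/ {m++} /__shadow_/ {s++} END {print "multipart:", m, "shadow:", s}'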

Examples:

> c9724aff-5fa0-4dd9-b494-57bdb48fab4e.1371134.1__multipart_CC-MAIN-2016-40/segments/1474738660158.61/warc/CC-MAIN-20160924173740-00147-ip-10-143-35-109.ec2.internal.warc.gz.2~WezpbEQW1C9nskvtnyAteCVoO3D255Q.29
> c9724aff-5fa0-4dd9-b494-57bdb48fab4e.1371134.1__multipart_CC-MAIN-2016-40/segments/1474738660158.61/warc/CC-MAIN-20160924173740-00147-ip-10-143-35-109.ec2.internal.warc.gz.2~WezpbEQW1C9nskvtnyAteCVoO3D255Q.61
> c9724aff-5fa0-4dd9-b494-57bdb48fab4e.1371134.1__shadow_segments/1466783398869.97/wet/CC-MAIN-20160624154958-00194-ip-10-164-35-72.ec2.internal.warc.wet.gz.2~7ru9WPCLMf9Lpi__TP1NXuYwjSU7KQK.11_1
> c9724aff-5fa0-4dd9-b494-57bdb48fab4e.1371134.1__shadow_crawl-data/CC-MAIN-2016-26/segments/1466783396147.66/wet/CC-MAIN-20160624154956-00071-ip-10-164-35-72.ec2.internal.warc.wet.gz.2~7bKg6WEmNo23IQ6rd8oWF_vbaG0QAFR.6_1
> c9724aff-5fa0-4dd9-b494-57bdb48fab4e.1371134.1__shadow_segments/1466783398516.82/wet/CC-MAIN-20160624154958-00172-ip-10-164-35-72.ec2.internal.warc.wet.gz.2~ap5QynCJTco_L7yK6bn4M_bnHBbBe64.14_1
> c9724aff-5fa0-4dd9-b494-57bdb48fab4e.1371134.1__multipart_CC-MAIN-2016-40/segments/1474738662400.75/warc/CC-MAIN-20160924173742-00076-ip-10-143-35-109.ec2.internal.warc.gz.2~LEM4bpbbdiTu86rs3Ew_LFNN_oHg_m7.13
> c9724aff-5fa0-4dd9-b494-57bdb48fab4e.1371134.1__shadow_CC-MAIN-2016-40/segments/1474738662400.75/warc/CC-MAIN-20160924173742-00033-ip-10-143-35-109.ec2.internal.warc.gz.2~FrN02NmencyDwXavvuzwqR8M8WnWNbH.8_1
> c9724aff-5fa0-4dd9-b494-57bdb48fab4e.1371134.1__multipart_segments/1466783395560.14/wet/CC-MAIN-20160624154955-00118-ip-10-164-35-72.ec2.internal.warc.wet.gz.2~GqyEUdSepIxGwPOXfKLSxtS8miWGASe.3
> c9724aff-5fa0-4dd9-b494-57bdb48fab4e.1371134.1__multipart_segments/1466783395346.6/wet/CC-MAIN-20160624154955-00083-ip-10-164-35-72.ec2.internal.warc.wet.gz.2~cTQ86ZEmOvxYD4BUI7zW37X-JcJeMgW.19
> c9724aff-5fa0-4dd9-b494-57bdb48fab4e.1371134.1__multipart_CC-MAIN-2016-40/segments/1474738660158.61/warc/CC-MAIN-20160924173740-00147-ip-10-143-35-109.ec2.internal.warc.gz.2~WezpbEQW1C9nskvtnyAteCVoO3D255Q.62
> c9724aff-5fa0-4dd9-b494-57bdb48fab4e.1371134.1__shadow_CC-MAIN-2016-40/segments/1474738662400.75/warc/CC-MAIN-20160924173742-00259-ip-10-143-35-109.ec2.internal.warc.gz.2~1b-olF9koids0gqT9DsO0y1vAsTOasf.12_1
> c9724aff-5fa0-4dd9-b494-57bdb48fab4e.1371134.1__shadow_CC-MAIN-2016-40/segments/1474738660338.16/warc/CC-MAIN-20160924173740-00067-ip-10-143-35-109.ec2.internal.warc.gz.2~JxuX8v0DmsSgAr3iprPBoHx6PoTKRi6.19_1
> c9724aff-5fa0-4dd9-b494-57bdb48fab4e.1371134.1__multipart_segments/1466783397864.87/wet/CC-MAIN-20160624154957-00110-ip-10-164-35-72.ec2.internal.warc.wet.gz.2~q2_hY5oSoBWaSZgxh0NdK8JvxmEySPB.29
> c9724aff-5fa0-4dd9-b494-57bdb48fab4e.1371134.1__shadow_segments/1466783396949.33/wet/CC-MAIN-20160624154956-00000-ip-10-164-35-72.ec2.internal.warc.wet.gz.2~kUInFVpsWy23JFm9eWNPiFNKlXrjDQU.18_1
> c9724aff-5fa0-4dd9-b494-57bdb48fab4e.1371134.1__multipart_CC-MAIN-2016-40/segments/1474738662400.75/warc/CC-MAIN-20160924173742-00076-ip-10-143-35-109.ec2.internal.warc.gz.2~LEM4bpbbdiTu86rs3Ew_LFNN_oHg_m7.36

Firstly, can I delete the pool even if there are orphan objects in it? Should I also delete the other metadata pools (index, data_extra) related to this pool that are defined in the zone? Is there any other data I should clean up to make sure there are no side effects from removing those objects by deleting the pool instead of deleting them with radosgw-admin orphans?
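
In case it helps to answer, this is how I would check which pools the zone actually uses, and how I would remove the data pool if that turns out to be safe (only a sketch, and the delete is obviously destructive):

> # list the pools (index, data, data_extra, ...) defined in the zone
> radosgw-admin --cluster cephprod zone get
> # if it is safe, remove the data pool (irreversible)
> ceph osd pool delete erasure.rgw.buckets.data erasure.rgw.buckets.data --yes-i-really-really-mean-it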

For now, I have followed this doc to find and delete them:

https://access.redhat.com/documentation/en/red-hat-ceph-storage/1.3/single/object-gateway-guide-for-ubuntu/#finding_orphan_objects

I have run this command:

> radosgw-admin --cluster cephprod orphans find --pool=erasure.rgw.buckets.data --job-id=erasure

but it is stuck in a loop. Is this normal behavior?

Example of the output I have been getting for at least 2 hours:

> storing 1 entries at orphan.scan.erasure.linked.2
> storing 1 entries at orphan.scan.erasure.linked.5
> storing 1 entries at orphan.scan.erasure.linked.9
> storing 1 entries at orphan.scan.erasure.linked.19
> storing 1 entries at orphan.scan.erasure.linked.25
> storing 1 entries at orphan.scan.erasure.linked.40
> storing 1 entries at orphan.scan.erasure.linked.43
> storing 1 entries at orphan.scan.erasure.linked.47
> storing 1 entries at orphan.scan.erasure.linked.56
> storing 1 entries at orphan.scan.erasure.linked.63
> storing 1 entries at orphan.scan.erasure.linked.9
> storing 1 entries at orphan.scan.erasure.linked.25
> storing 1 entries at orphan.scan.erasure.linked.40
> storing 1 entries at orphan.scan.erasure.linked.56
> storing 1 entries at orphan.scan.erasure.linked.2
> storing 1 entries at orphan.scan.erasure.linked.5
> storing 1 entries at orphan.scan.erasure.linked.9
> storing 1 entries at orphan.scan.erasure.linked.19
> storing 1 entries at orphan.scan.erasure.linked.25
> storing 1 entries at orphan.scan.erasure.linked.40
> storing 1 entries at orphan.scan.erasure.linked.43
> storing 1 entries at orphan.scan.erasure.linked.47
> storing 1 entries at orphan.scan.erasure.linked.56
> storing 1 entries at orphan.scan.erasure.linked.63
> storing 1 entries at orphan.scan.erasure.linked.2
> storing 1 entries at orphan.scan.erasure.linked.5
> storing 1 entries at orphan.scan.erasure.linked.9
> storing 1 entries at orphan.scan.erasure.linked.19
> storing 1 entries at orphan.scan.erasure.linked.25
> storing 1 entries at orphan.scan.erasure.linked.40
> storing 1 entries at orphan.scan.erasure.linked.43
> storing 1 entries at orphan.scan.erasure.linked.47
> storing 1 entries at orphan.scan.erasure.linked.56
> storing 1 entries at orphan.scan.erasure.linked.63
> storing 1 entries at orphan.scan.erasure.linked.9
> storing 1 entries at orphan.scan.erasure.linked.25
> storing 1 entries at orphan.scan.erasure.linked.40
> storing 1 entries at orphan.scan.erasure.linked.56
> storing 1 entries at orphan.scan.erasure.linked.2
> storing 1 entries at orphan.scan.erasure.linked.5
> storing 1 entries at orphan.scan.erasure.linked.9
> storing 1 entries at orphan.scan.erasure.linked.19
> storing 1 entries at orphan.scan.erasure.linked.25
> storing 1 entries at orphan.scan.erasure.linked.40
> storing 1 entries at orphan.scan.erasure.linked.43
> storing 1 entries at orphan.scan.erasure.linked.47
> storing 1 entries at orphan.scan.erasure.linked.56
> storing 1 entries at orphan.scan.erasure.linked.63
> storing 1 entries at orphan.scan.erasure.linked.2
> storing 1 entries at orphan.scan.erasure.linked.5
> storing 1 entries at orphan.scan.erasure.linked.9
> storing 1 entries at orphan.scan.erasure.linked.19
> storing 1 entries at orphan.scan.erasure.linked.25
> storing 1 entries at orphan.scan.erasure.linked.40
> storing 1 entries at orphan.scan.erasure.linked.43
> storing 1 entries at orphan.scan.erasure.linked.47
> storing 1 entries at orphan.scan.erasure.linked.56
> storing 1 entries at orphan.scan.erasure.linked.63
> storing 1 entries at orphan.scan.erasure.linked.9
> storing 1 entries at orphan.scan.erasure.linked.25
> storing 1 entries at orphan.scan.erasure.linked.40
> storing 1 entries at orphan.scan.erasure.linked.56
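
If I have to abort this run, my understanding is that the scan state can be listed and cleaned up with the orphans sub-commands (please correct me if that is wrong):

> # list the orphan scan jobs currently known (if this sub-command is available in 10.2.2)
> radosgw-admin --cluster cephprod orphans list-jobs
> # drop the intermediate data kept for this job, as described in the doc above
> radosgw-admin --cluster cephprod orphans finish --job-id=erasure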

Thanks for your help

--
Yoann Moulin
EPFL IC-IT



--
Marius Vaitiekūnas
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
