On 5/29/19 11:22 PM, J. Eric Ivancich wrote:
> Hi Wido,
>
> When you run `radosgw-admin gc list`, I assume you are *not* using the
> "--include-all" flag, right? If you're not using that flag, then
> everything listed should be expired and ready for clean-up. If, after
> running `radosgw-admin gc process`, the same entries appear in
> `radosgw-admin gc list`, then gc has apparently stalled.

I am not using --include-all in either case. GC seems to stall and does
nothing when I watch it with --debug-rados=10.

> There were a few bugs within gc processing that could prevent it from
> making forward progress. They were resolved with a PR (master:
> https://github.com/ceph/ceph/pull/26601 ; mimic backport:
> https://github.com/ceph/ceph/pull/27796). Unfortunately that code was
> backported after the 13.2.5 release, but it is in place for the 13.2.6
> release of mimic.

Thanks! I might grab some packages from Shaman to give GC a try.

Wido

> Eric
>
> On 5/29/19 3:19 AM, Wido den Hollander wrote:
>> Hi,
>>
>> I've got a Ceph cluster with this status:
>>
>>     health: HEALTH_WARN
>>         3 large omap objects
>>
>> After looking into it, I see that the issue comes from objects in the
>> '.rgw.gc' pool.
>>
>> Investigating further, I found that the gc.* objects have a lot of
>> OMAP keys:
>>
>>     for OBJ in $(rados -p .rgw.gc ls); do
>>         echo "$OBJ"
>>         rados -p .rgw.gc listomapkeys "$OBJ" | wc -l
>>     done
>>
>> On average these objects have about 100k OMAP keys each, but two
>> stand out with about 3M OMAP keys.
>>
>> I can list the GC with 'radosgw-admin gc list', which yields a JSON
>> document a couple of MB in size.
>>
>> I ran:
>>
>>     $ radosgw-admin gc process
>>
>> That runs for hours and then finishes, but the large list of OMAP
>> keys stays.
>>
>> Running Mimic 13.2.5 on this cluster.
>>
>> Has anybody seen this before?
>>
>> Wido
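
For anyone hitting the same warning, the checks discussed in this thread
boil down to a short script. This is a minimal sketch, assuming the default
'.rgw.gc' pool name used above; `jq` is an assumption here, any tool that
can count JSON array elements works:

    # Count OMAP keys per gc shard object; rerun after
    # `radosgw-admin gc process`. If the counts never shrink,
    # gc has stalled (see the PRs linked above).
    for OBJ in $(rados -p .rgw.gc ls); do
        printf '%s %s\n' "$OBJ" "$(rados -p .rgw.gc listomapkeys "$OBJ" | wc -l)"
    done

    # Expired entries that are eligible for clean-up:
    radosgw-admin gc list | jq length

    # The full queue, including entries that have not yet expired:
    radosgw-admin gc list --include-all | jq length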
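Similarly, a sketch of the debug invocation Wido mentions for watching gc
stall; --log-to-stderr is a standard Ceph global option added here so the
debug output is visible on the terminal, and the log path is illustrative:

    # Run gc processing with verbose RADOS logging to spot a stall,
    # capturing both output streams to a file for later inspection.
    radosgw-admin gc process --debug-rados=10 --log-to-stderr 2>&1 \
        | tee /tmp/gc-process-debug.log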