Just ignore rgw.none; as far as I investigated, it's an old bug, just a representation bug. Newer versions and newly created buckets no longer have rgw.none, and right now there's no way to remove the rgw.none section. I'm on Nautilus 14.2.11, and rgw.none hasn't been present for several versions now...

-----Original Message-----
From: Konstantin Shalygin <k0ste@xxxxxxxx>
Sent: Tuesday, September 1, 2020 10:30
To: Jean-Sebastien Landry <jean-sebastien.landry.6@xxxxxxxxx>; ceph-users@xxxxxxx
Subject: Re: rgw.none vs quota

On 8/24/20 11:20 PM, Jean-Sebastien Landry wrote:
> Hi everyone, a bucket was over quota (default quota of 300k objects per
> bucket), so I enabled the object quota for this bucket and set a quota
> of 600k objects.
>
> We are on Luminous (12.2.12) and dynamic resharding is disabled, so I
> manually resharded from 3 to 6 shards.
>
> Since then, radosgw-admin bucket stats reports an `rgw.none` entry in
> the usage section for this bucket.
>
> I searched the mailing lists, Bugzilla, and GitHub, and it looks like I
> can ignore the rgw.none stats (0-byte objects, entries left in the
> index marked as cancelled...), but the num_objects in rgw.none counts
> toward the quota usage.
>
> I bumped the quota to 800k objects to work around the problem (without
> resharding).
>
> Is there a way I can garbage-collect the rgw.none entries?
> Is this problem fixed in Mimic/Nautilus/Octopus?
>
> "usage": {
>     "rgw.none": {
>         "size": 0,
>         "size_actual": 0,
>         "size_utilized": 0,
>         "size_kb": 0,
>         "size_kb_actual": 0,
>         "size_kb_utilized": 0,
>         "num_objects": 417827
>     },
>     "rgw.main": {
>         "size": 1390778138502,
>         "size_actual": 1391581007872,
>         "size_utilized": 1390778138502,
>         "size_kb": 1358181776,
>         "size_kb_actual": 1358965828,
>         "size_kb_utilized": 1358181776,
>         "num_objects": 305637
>     }
> },

Try to upgrade to 12.2.13 first. Many RGW bugs are fixed in this release, including `--fix`, `stale instances`, `lc after reshard`, etc...

k

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
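
For reference, a minimal sketch of the quota-and-reshard workaround described above, as shell commands. The bucket name "mybucket" is a hypothetical stand-in; the radosgw-admin subcommands and flags are the standard ones, though exact behavior varies by release:

    # Inspect per-bucket usage; the rgw.none entry appears under "usage".
    radosgw-admin bucket stats --bucket=mybucket

    # Enable the per-bucket object quota and bump it (the 600k -> 800k workaround).
    radosgw-admin quota set --quota-scope=bucket --bucket=mybucket --max-objects=800000
    radosgw-admin quota enable --quota-scope=bucket --bucket=mybucket

    # Manual reshard from 3 to 6 shards, since dynamic resharding is disabled.
    radosgw-admin bucket reshard --bucket=mybucket --num-shards=6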
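
The fixes Konstantin mentions map to maintenance commands along these lines; again a sketch with a placeholder bucket name, and note that the stale-instances subcommand only exists in newer point releases:

    # Recalculate the bucket index stats; this is the usual first step for
    # stale index accounting such as leftover rgw.none counters (whether it
    # clears them depends on the release).
    radosgw-admin bucket check --bucket=mybucket --fix

    # List and remove stale bucket index instances left behind by resharding.
    radosgw-admin reshard stale-instances list
    radosgw-admin reshard stale-instances rm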