I have a radosgw instance (ceph 0.71-299-g5cba838 src build), running on
Ubuntu 13.10. I've been experimenting with multipart uploads (which are
working fine). However, while *most* objects (from the radosgw
perspective) have their storage space gc'd a while after deletion, I'm
seeing what looks like a stubborn remnant of approx 3G that is not
being cleaned up - even after a day or so.
I'm guessing that it is from a multipart upload that I cancelled after
it had transferred about 3G, and that the parts have become
semi-orphaned. Is there any way to clean them up (apart from brutally
removing them at the rados level... which seems a bit scary)?
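For what it's worth, I think the clean route would have been to abort
the upload via the S3 API while the bucket still existed - something
like this boto sketch (untested, and presumably too late now that the
bucket is gone):

# conn is a boto S3Connection; 'testbucket' is a placeholder name.
bucket = conn.get_bucket('testbucket')
for mp in bucket.get_all_multipart_uploads():
    print "aborting %s (%s)" % (mp.key_name, mp.id)
    mp.cancel_upload()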
Details:
I've made gc a little more aggressive:
$ tail /etc/ceph/ceph.conf
; gc tweaks
rgw gc obj min wait = 300
rgw gc processor max time = 1200
rgw gc processor max period = 1800
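(As far as I can tell from the radosgw-admin help, pending gc work can
also be listed and kicked manually - I may be misreading it, but
something like:

$ radosgw-admin gc list
$ radosgw-admin gc process
)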
At the radosgw level there are no buckets or objects at all - e.g.
using boto (conn setup shown just below):

for bucket in conn.get_all_buckets():
    print bucket.name
...finds nothing.
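For completeness, conn here is a boto S3Connection created roughly
along these lines (key and host below are placeholders):

import boto
import boto.s3.connection

conn = boto.connect_s3(
    aws_access_key_id='KEY',
    aws_secret_access_key='SECRET',
    host='rgw.example.com',
    calling_format=boto.s3.connection.OrdinaryCallingFormat())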
However at the rados level, scanning using the Python API:

import rados

conn = rados.Rados(conffile='/etc/ceph/ceph.conf')
conn.connect()
for pool in conn.list_pools():
    poolio = conn.open_ioctx(pool)
    poolstats = poolio.get_stats()
    print "{:30} {:10}".format(pool, poolstats['num_bytes'])
shows that .rgw.buckets is of size 2852126720 (i.e. about 2.7G).
Probing .rgw.buckets via:
poolio = conn.open_ioctx('.rgw.buckets')
for obj in poolio.list_objects():
    objstat = obj.stat()  # (size, mtime)
    print "%s\t%s" % (obj.key, objstat[0])
shows lots of 4M objects - presumably left over from my cancelled upload:
default.4902.1__multipart_data0/dump/big.dat.8De1Q8LjjDsHNl630fuwpWnAvc8l8-E.meta    0
default.5001.20__shadow_big.dat.uJRWhVeBFI51hR97csBNke2Sc4Dk9uo.2_176    4194304
default.5001.20__shadow_big.dat.uJRWhVeBFI51hR97csBNke2Sc4Dk9uo.3_122    4194304
default.5001.20__shadow_big.dat.uJRWhVeBFI51hR97csBNke2Sc4Dk9uo.1_159    4194304
default.5001.10__shadow_big.dat.tnGKpUbDi76i3WL1aVXFwFY_62pGvcX.2_253    4194304
...
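In case it clarifies what I mean by brutally removing at the rados
level: I'm imagining something like the following (untested sketch -
the big.dat name matching is specific to my cancelled upload), which is
exactly the part that seems scary:

# Untested sketch - delete the leftover multipart/shadow pieces
# directly from the .rgw.buckets pool, behind radosgw's back.
poolio = conn.open_ioctx('.rgw.buckets')
for obj in poolio.list_objects():
    if '__shadow_big.dat' in obj.key or '__multipart_' in obj.key:
        print "removing", obj.key
        poolio.remove_object(obj.key)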
Regards
Mark