What version of Ceph are you using? There were a few bugs that left behind orphaned objects (e.g. http://tracker.ceph.com/issues/18331 and http://tracker.ceph.com/issues/10295). If that's your problem, there is a tool for finding these objects so you can then delete them manually - have a search for "rgw orphans find".

On Sat, Oct 21, 2017 at 2:40 AM, Mark Schouten <mark@xxxxxxxx> wrote:
> Hi,
>
> I have a bucket that according to radosgw-admin is about 8TB, even though
> it's really only 961GB.
>
> I have run radosgw-admin gc process, and that completes quite fast:
>
> root@osdnode04:~# radosgw-admin gc process
> root@osdnode04:~# radosgw-admin gc list
> []
>
> {
>     "bucket": "qnapnas",
>     "zonegroup": "8e81f1e2-c173-4b8d-b421-6ccabdf69f2e",
>     "placement_rule": "default-placement",
>     "explicit_placement": {
>         "data_pool": "default.rgw.buckets.data",
>         "data_extra_pool": "default.rgw.buckets.non-ec",
>         "index_pool": "default.rgw.buckets.index"
>     },
>     "id": "1c19a332-7ffc-4472-b852-ec4a143785cc.19675875.3",
>     "marker": "1c19a332-7ffc-4472-b852-ec4a143785cc.19675875.3",
>     "index_type": "Normal",
>     "owner": "DB0339$REDACTED",
>     "ver": "0#963948",
>     "master_ver": "0#0",
>     "mtime": "2017-08-23 12:15:50.203650",
>     "max_marker": "0#",
>     "usage": {
>         "rgw.main": {
>             "size": 8650431493893,
>             "size_actual": 8650431578112,
>             "size_utilized": 8650431493893,
>             "size_kb": 8447687006,
>             "size_kb_actual": 8447687088,
>             "size_kb_utilized": 8447687006,
>             "num_objects": 227080
>         },
>         "rgw.multimeta": {
>             "size": 0,
>             "size_actual": 0,
>             "size_utilized": 0,
>             "size_kb": 0,
>             "size_kb_actual": 0,
>             "size_kb_utilized": 0,
>             "num_objects": 17
>         }
>     },
>     "bucket_quota": {
>         "enabled": false,
>         "check_on_raw": false,
>         "max_size": -1024,
>         "max_size_kb": 0,
>         "max_objects": -1
>     }
> }
>
> Can anybody explain what's wrong?
>
> Kind regards,
>
> --
> Kerio Operator in de Cloud?
> https://www.kerioindecloud.nl/
> Mark Schouten | Tuxis Internet Engineering
> KvK: 61527076 | http://www.tuxis.nl/
> T: 0318 200208 | info@xxxxxxxx
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
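For reference, a minimal sketch of that orphan-scan workflow against a live cluster. This assumes a Jewel/Luminous-era radosgw-admin; the pool name is taken from the data_pool in the bucket stats above, and the job ID "orphans-qnapnas" is an arbitrary name you choose:

```shell
# Scan the data pool for RADOS objects that are no longer referenced by
# any bucket index. Scan state is kept under the job ID, so an interrupted
# run can be resumed with the same ID.
radosgw-admin orphans find --pool=default.rgw.buckets.data --job-id=orphans-qnapnas

# Show known scan jobs and their state.
radosgw-admin orphans list-jobs

# Remove the scan's bookkeeping data once you are done with the results.
radosgw-admin orphans finish --job-id=orphans-qnapnas
```

Note that `orphans find` only reports the leaked objects; it deletes nothing itself. You would verify each reported object and then remove it manually, e.g. with `rados -p default.rgw.buckets.data rm <object>`.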