I've seen that the cluster started cleaning up on Sunday, which made the bucket size shrink again. I've changed the garbage collection settings to:
rgw_gc_max_objs = 67
rgw_gc_obj_min_wait = 1800
rgw_gc_processor_max_time = 1800
rgw_gc_processor_period = 1800
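For reference, these overrides would typically live in ceph.conf under the RGW client section, something like the sketch below. The section name and systemd unit are assumptions based on the osdnode04 hostname seen later in the thread; the defaults in the comments are the stock Luminous values.

# /etc/ceph/ceph.conf -- section name assumed, adjust to your RGW instance
[client.rgw.osdnode04]
rgw_gc_max_objs = 67               # number of GC shard objects (default 32)
rgw_gc_obj_min_wait = 1800         # seconds a deleted object waits before it is GC-eligible (default 7200)
rgw_gc_processor_max_time = 1800   # max seconds per GC cycle (default 3600)
rgw_gc_processor_period = 1800     # seconds between GC cycles (default 3600)

# the gateway needs a restart to pick these up, e.g.:
systemctl restart ceph-radosgw@rgw.osdnode04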
But it looks like this actually broke something. A file that was deleted yesterday has still not been removed:
2017-10-24 23:38:22.595341 7ff3ec48c700 1 civetweb: 0x55e8b3ee0000: 10.0.0.35 - - [24/Oct/2017:23:38:21 +0200] "DELETE /qnapnas/VZDump/VZDump-Files/dump/vzdump-qemu-106-2017_10_15-03_06_18.vma.lzo HTTP/1.1" 1 0 - QNAP,TS-431XU,1.2.420
radosgw-admin gc list | grep -c vzdump-qemu-106-2017_10_15-03_06_18.vma.lzo
13980
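Note that a plain `radosgw-admin gc list` only prints entries that are already past rgw_gc_obj_min_wait, so those 13980 hits are already eligible for collection (if each entry is one 4 MiB tail stripe, that would be roughly 55 GiB waiting to be reclaimed). A sketch for cross-checking the full queue and forcing a pass:

# include entries still inside the min-wait window as well
radosgw-admin gc list --include-all | grep -c vzdump-qemu-106-2017_10_15-03_06_18.vma.lzo
# trigger a GC pass manually
radosgw-admin gc process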
Kind regards,
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten | Tuxis Internet Engineering
KvK: 61527076 | http://www.tuxis.nl/
T: 0318 200208 | info@xxxxxxxx
From: Christian Wuerdig <christian.wuerdig@xxxxxxxxx>
To: Mark Schouten <mark@xxxxxxxx>
Cc: Ceph Users <ceph-users@xxxxxxxxxxxxxx>
Sent: 25-10-2017 0:28
Subject: Re: Reported bucket size incorrect (Luminous)
What version of Ceph are you using? There were a few bugs that left
behind orphaned objects (e.g. http://tracker.ceph.com/issues/18331 and
http://tracker.ceph.com/issues/10295). If that's your problem, there
is a tool for finding these objects so you can then delete them
manually - do a Google search for "rgw orphan find".
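Presumably that refers to radosgw-admin's orphan scan. A rough sketch of its use; the pool name is taken from the bucket stats quoted below, and the job ID is an arbitrary label:

# scan a data pool for RADOS objects that no bucket index references
# (this only reports candidates; it does not delete anything itself)
radosgw-admin orphans find --pool=default.rgw.buckets.data --job-id=orphans-qnapnas
# list scan jobs / clean up the scan's bookkeeping when done
radosgw-admin orphans list-jobs
radosgw-admin orphans finish --job-id=orphans-qnapnas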
On Sat, Oct 21, 2017 at 2:40 AM, Mark Schouten <mark@xxxxxxxx> wrote:
> Hi,
>
> I have a bucket that according to radosgw-admin is about 8TB, even though
> it's really only 961GB.
>
> I have run radosgw-admin gc process, and that completes quite fast.
> root@osdnode04:~# radosgw-admin gc process
> root@osdnode04:~# radosgw-admin gc list
> []
>
> {
>     "bucket": "qnapnas",
>     "zonegroup": "8e81f1e2-c173-4b8d-b421-6ccabdf69f2e",
>     "placement_rule": "default-placement",
>     "explicit_placement": {
>         "data_pool": "default.rgw.buckets.data",
>         "data_extra_pool": "default.rgw.buckets.non-ec",
>         "index_pool": "default.rgw.buckets.index"
>     },
>     "id": "1c19a332-7ffc-4472-b852-ec4a143785cc.19675875.3",
>     "marker": "1c19a332-7ffc-4472-b852-ec4a143785cc.19675875.3",
>     "index_type": "Normal",
>     "owner": "DB0339$REDACTED",
>     "ver": "0#963948",
>     "master_ver": "0#0",
>     "mtime": "2017-08-23 12:15:50.203650",
>     "max_marker": "0#",
>     "usage": {
>         "rgw.main": {
>             "size": 8650431493893,
>             "size_actual": 8650431578112,
>             "size_utilized": 8650431493893,
>             "size_kb": 8447687006,
>             "size_kb_actual": 8447687088,
>             "size_kb_utilized": 8447687006,
>             "num_objects": 227080
>         },
>         "rgw.multimeta": {
>             "size": 0,
>             "size_actual": 0,
>             "size_utilized": 0,
>             "size_kb": 0,
>             "size_kb_actual": 0,
>             "size_kb_utilized": 0,
>             "num_objects": 17
>         }
>     },
>     "bucket_quota": {
>         "enabled": false,
>         "check_on_raw": false,
>         "max_size": -1024,
>         "max_size_kb": 0,
>         "max_objects": -1
>     }
> },
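> To put numbers on the discrepancy (961 GB being what the client sees):
>
>   reported (rgw.main size): 8650431493893 B / 10^12 ≈ 8.65 TB (≈ 7.87 TiB)
>   actual data:              ≈ 961 GB
>   delta:                    ≈ 7.69 TB still attributed to the bucket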
>
>
> Can anybody explain what's wrong?
>
>
> Kind regards,
>
> --
> Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
> Mark Schouten | Tuxis Internet Engineering
> KvK: 61527076 | http://www.tuxis.nl/
> T: 0318 200208 | info@xxxxxxxx
>