2017-05-12 2:55 GMT+00:00 Ben Hines <bhines@xxxxxxxxx>:
> It actually seems like these values aren't being honored; I actually see
> many more objects being processed by gc (as well as Kraken object
> lifecycle), even though my values are at the default 32 objs.
>
> 19:52:44 root@<> /var/run/ceph $ ceph --admin-daemon
> /var/run/ceph/ceph-client.<>.asok config show | grep 'gc\|lc'
>     "rgw_enable_gc_threads": "true",
>     "rgw_enable_lc_threads": "true",
>     "rgw_lc_lock_max_time": "60",
>     "rgw_lc_max_objs": "32",
>     "rgw_lc_debug_interval": "-1",
>     "rgw_gc_max_objs": "32",
>     "rgw_gc_obj_min_wait": "7200",
>     "rgw_gc_processor_max_time": "3600",
>     "rgw_gc_processor_period": "3600",
>     "rgw_objexp_gc_interval": "600",
>
> gc (this is all within one hour, so it must be within one cycle):
>
> 19:49:17 root@<> /var/log/ceph $ grep 'gc::process: removing' client.<>.log
> | wc -l
> 6908
>
> lifecycle:
>
> 19:50:22 root@<> /var/log/ceph $ grep DELETED client.<>.log | wc -l
> 741
>
> Yehuda, do you know if these settings are still honored? (Personally, I
> don't want to limit it at all; I would rather it delete as many objects as
> it can within its runtime.)

rgw_gc_max_objs isn't the number of objects to be garbage collected; it
configures the number of shard objects in the gc pool that are used to
collect the information about the to-be-gc'd objects. The description of
this variable in the docs seems to be pretty misleading. Also, having the
default be a prime number might be a better choice, seeing that the code
caps it at a pretty large prime anyway.
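To illustrate (a simplified sketch, not the actual implementation; the hash
below is a stand-in for the one RGW uses, and the 7877 cap is from memory,
so check rgw_gc.cc for the real thing): each pending-deletion record is
hashed onto one of the rgw_gc_max_objs shard objects in the gc pool. The
setting spreads the gc bookkeeping; it doesn't limit how much gc removes.

    // gc_shard.cc: sketch of gc shard selection, assumptions as above
    #include <algorithm>
    #include <cstdio>
    #include <functional>
    #include <string>

    static const int HASH_PRIME = 7877;  // upper bound on the shard count

    // Map a deletion tag to one of the gc shard objects (gc.0 .. gc.N-1).
    int gc_shard(const std::string& tag, int max_objs) {
      max_objs = std::min(max_objs, HASH_PRIME);
      return static_cast<int>(std::hash<std::string>{}(tag) % max_objs);
    }

    int main() {
      // With the default of 32, every record lands in one of gc.0 .. gc.31;
      // raising rgw_gc_max_objs spreads the records across more shards so
      // they can be processed in parallel. It doesn't cap what gc removes.
      std::printf("gc.%d\n", gc_shard("example-deletion-tag", 32));
      return 0;
    }

A prime shard count helps because tags tend to be generated with regular
patterns; taking the hash modulo a prime keeps the records from piling up
on a few shards when the pattern shares a factor with the shard count.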
> Also curious if lifecycle-deleted objects go through the garbage collector,
> or are they just immediately deleted?
>
> -Ben
>
> On Mon, Apr 10, 2017 at 2:46 PM, Deepak Naidu <dnaidu@xxxxxxxxxx> wrote:
>>
>> I still see the issue where the space is not getting freed. gc process
>> works sometimes, but sometimes it does nothing to clean things up; there
>> are no items in the gc list, yet the space is still used on the pool.
>>
>> Any ideas on the ideal config for automatic deletion of these objects
>> after the files are deleted?
>>
>> Currently set to:
>>
>> "rgw_gc_max_objs": "97",
>>
>> --
>> Deepak
>>
>> From: Deepak Naidu
>> Sent: Wednesday, April 05, 2017 2:56 PM
>> To: Ben Hines
>> Cc: ceph-users
>> Subject: RE: ceph df space for rgw.buckets.data shows used even when
>> files are deleted
>>
>> Thanks Ben.
>>
>> Is there a tuning param I can use to speed up the process?
>>
>> "rgw_gc_max_objs": "32",
>> "rgw_gc_obj_min_wait": "7200",
>> "rgw_gc_processor_max_time": "3600",
>> "rgw_gc_processor_period": "3600",
>>
>> --
>> Deepak
>>
>> From: Ben Hines [mailto:bhines@xxxxxxxxx]
>> Sent: Wednesday, April 05, 2017 2:41 PM
>> To: Deepak Naidu
>> Cc: ceph-users
>> Subject: Re: ceph df space for rgw.buckets.data shows used even when
>> files are deleted
>>
>> Ceph's RadosGW uses garbage collection by default.
>>
>> Try running 'radosgw-admin gc list' to list the objects awaiting garbage
>> collection, or 'radosgw-admin gc process' to trigger their deletion now.
>>
>> -Ben
>>
>> On Wed, Apr 5, 2017 at 12:15 PM, Deepak Naidu <dnaidu@xxxxxxxxxx> wrote:
>>
>> Folks,
>>
>> Trying to test the S3 object GW. When I upload any files, the space is
>> shown as used (that's normal behavior), but when the object is deleted
>> it still shows as used (I don't understand this). Below example.
>>
>> Currently there are no files in the entire S3 bucket, but it still shows
>> space used. Any insight is appreciated.
>>
>> ceph version 10.2.6
>>
>> NAME                      ID  USED    %USED  MAX AVAIL  OBJECTS
>> default.rgw.buckets.data  49  51200M  1.08   4598G      12800
>>
>> --
>> Deepak

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com