Thanks Matt for the fast response. Tonight at the datacenter we are adding more OSDs for S3. We will change the params and come back to share the experience.

Regards
Manuel

-----Original Message-----
From: Matt Benjamin <mbenjami@xxxxxxxxxx>
Sent: Sunday, May 24, 2020 22:47
To: EDH - Manuel Rios <mriosfer@xxxxxxxxxxxxxxxx>
CC: ceph-users@xxxxxxx
Subject: Re: RGW Garbage Collector

Hi Manuel,

rgw_gc_obj_min_wait -- yes, this is how you control how long rgw waits before removing the stripes of deleted objects

the following are more about gc performance and its proportion of available iops:

rgw_gc_processor_max_time -- controls how long gc runs once scheduled; a large value might be 3600
rgw_gc_processor_period -- sets the gc cycle; smaller is more frequent

If you want to make gc more aggressive when it is running, set the following (these can be increased further), which more than doubles the defaults:

rgw_gc_max_concurrent_io = 20
rgw_gc_max_trim_chunk = 32

If you want to increase gc's fraction of total rgw i/o, increase these (mostly concurrent_io).

regards,

Matt

On Sun, May 24, 2020 at 4:02 PM EDH - Manuel Rios <mriosfer@xxxxxxxxxxxxxxxx> wrote:
>
> Hi,
>
> I'm looking for any experience optimizing the garbage collector with the following configs:
>
> global advanced rgw_gc_obj_min_wait
> global advanced rgw_gc_processor_max_time
> global advanced rgw_gc_processor_period
>
> By default gc expires objects within 2 hours; we're looking to set expiry to 10 minutes, as our S3 cluster gets heavy uploads and deletes.
>
> Are those params usable? For us it doesn't make sense to keep deleted objects in gc for 2 hours.
>
> Regards
> Manuel
>

--
Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
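
For anyone wanting to try the settings discussed above together, a minimal sketch using the centralized config store (Nautilus or later). The values are illustrative, matching the numbers mentioned in the thread, not tested recommendations; the target could also be "global", as in Manuel's config listing:

  ceph config set client.rgw rgw_gc_obj_min_wait 600          # remove stripes of deleted objects after ~10 minutes
  ceph config set client.rgw rgw_gc_processor_period 600      # schedule a gc cycle every 10 minutes
  ceph config set client.rgw rgw_gc_processor_max_time 600    # let each gc cycle run for up to 10 minutes
  ceph config set client.rgw rgw_gc_max_concurrent_io 20      # double the default (10) concurrent gc i/o
  ceph config set client.rgw rgw_gc_max_trim_chunk 32         # double the default (16) trim batch size

The radosgw daemons may need a restart before all of these take effect.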