Hi!
I started to delete aborted multipart files (about 1200 objects); so far it has
deleted 3/4 of them, and the bucket usage is reduced by 2000GB out of about
3000GB total usage! So it looks like 95% of the bucket size was in aborted
multipart files!
Now, the legitimate question is why the lifecycle policy set to delete files
older than a day is not working at all (there are still 11GB of files visible
to the user that expired a month ago)! The same goes for aborted multipart
files: the policy is set to remove them after a day! I have executed
"lc process" many times per day over the last month; we set
rgw_lc_debug_interval to something low and ran "lc process", but according to
the logs it ignored this bucket completely.
Any suggestion is welcome, as I bet we have other buckets in the same
situation.
Thank you!
Paul

On Mon, Jul 26, 2021 at 2:59 PM Paul JURCO <paul.jurco@xxxxxxxxx> wrote:

> Hi Vidushi,
> aws s3api list-object-versions shows the same files as s3cmd, so I would
> say versioning is not enabled.
> The result of aws s3api get-bucket-versioning is empty.
> Is there any other method to check whether versioning is enabled?
> Thank you!
> Paul
>
> On Mon, Jul 26, 2021 at 2:42 PM Vidushi Mishra <vimishra@xxxxxxxxxx>
> wrote:
>
>> Hi Paul,
>>
>> Are these non-current versioned objects displayed in the bucket stats?
>> Also, the LC rule applied to the bucket can only delete/expire objects
>> for a normal bucket.
>> In the case of a versioned bucket, the LC rule applied will expire the
>> current version [create a delete-marker for every object and move the
>> object version from current to non-current, thereby reflecting the same
>> number of objects in the bucket stats output].
>>
>> Vidushi
>>
>> On Mon, Jul 26, 2021 at 4:55 PM Paul JURCO <paul.jurco@xxxxxxxxx> wrote:
>>
>>> Hi!
>>> I need some help understanding LC processing.
>>> On the latest versions of Octopus (tested with 15.2.13 and 15.2.8) we
>>> have at least one bucket whose files are not being removed when they
>>> expire.
>>> The size of the bucket reported by radosgw-admin differs from the one
>>> obtained with s3cmd, logs below: ~3TB in bucket stats vs 11GB from
>>> s3cmd.
>>> We tried to run the LC manually several times (lc process) but with no
>>> success; even a bucket check (including with --fix --check-objects)
>>> didn't help.
>>> The configs we changed are below; we have buckets a few TB in size with
>>> millions of objects, and the default 6h LC processing window was never
>>> enough:
>>> rgw_lc_debug_interval = 28800
>>> rgw_lifecycle_work_time = 00:00-23:59
>>> rgw_lc_max_worker = 5
>>> rgw_lc_max_wp_worker = 9
>>> rgw_enable_lc_threads = true
>>>
>>> The status of LC is always COMPLETE:
>>> {
>>>     "bucket":
>>> ":feeds-bucket-dev-dc418787:3ccb869f-b0f4-4fb9-a8d7-ecf5f5e18f33.37270170.140",
>>>     "started": "Mon, 26 Jul 2021 07:10:14 GMT",
>>>     "status": "COMPLETE"
>>> },
>>> It looks like versioning is not supported yet, and with aws-cli I did
>>> not get any response for 's3api get-bucket-versioning'.
>>> So, what should we do?
>>>
>>> $ s3cmd -c .s3cfg-feeds-bucket-dev-dc418787 du
>>> s3://feeds-bucket-dev-dc418787
>>> 12725854360  192 objects  s3://feeds-bucket-dev-dc418787/
>>> (192 files, 11GB)
>>>
>>> Output of 'radosgw-admin lc get' and 'bucket stats' is attached.
>>> Thank you for any suggestions!
>>>
>>> Also, we found that the LC is removing files earlier than configured
>>> for another bucket: 6h after the file was added instead of 31 days.
>>>
>>> Paul
>>> _______________________________________________
>>> ceph-users mailing list -- ceph-users@xxxxxxx
>>> To unsubscribe send an email to ceph-users-leave@xxxxxxx
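
For reference, a lifecycle policy covering both cases discussed in this thread
(expiring objects after a day and cleaning up aborted multipart uploads after
a day) looks like this in the JSON format accepted by
`aws s3api put-bucket-lifecycle-configuration`. This is a sketch, not the
poster's actual rule; the rule ID is illustrative:

```json
{
    "Rules": [
        {
            "ID": "expire-and-abort-mpu",
            "Status": "Enabled",
            "Filter": { "Prefix": "" },
            "Expiration": { "Days": 1 },
            "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 1 }
        }
    ]
}
```

It would be applied with something like
`aws s3api put-bucket-lifecycle-configuration --bucket feeds-bucket-dev-dc418787 --lifecycle-configuration file://lc.json`,
after which `radosgw-admin lc get --bucket <name>` should show the rule on the
RGW side.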