Hi,

I'm sure that I was running 19.2.0. Yes, the storage class field of the object wasn't changing. I've done some more tests and LC seems to run fine in the background. Running `radosgw-admin lc process` manually is unreliable: in my tests it always failed when I tried to run it just after creating the lifecycle configuration. Running `lc process` a few minutes after the lifecycle was created usually works fine. It's probably not an issue for production, but it made my tests harder.
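For anyone trying to reproduce this, a rough sketch of the timing workaround I mean (the bucket name and the sleep/retry values below are only examples from my test setup, not anything tuned):

#!/bin/bash
# Rough sketch: give RGW a few minutes after creating the lifecycle config
# before triggering LC manually, then retry until `lc list` reports COMPLETE
# for the bucket. Bucket name and timings are examples only.
BUCKET=test1

sleep 300   # `lc process` run immediately after creating the LC kept failing for me

for attempt in 1 2 3 4 5; do
    radosgw-admin lc process --bucket "$BUCKET"
    # `lc list` prints per-bucket entries; the status line follows the bucket line
    if radosgw-admin lc list | grep -A3 ":${BUCKET}:" | grep -q '"status": "COMPLETE"'; then
        echo "LC reported COMPLETE for $BUCKET"
        break
    fi
    sleep 60
done
# Note: a COMPLETE status alone doesn't prove objects were actually
# transitioned; the storage class of the objects still needs to be checked.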
Adam

On 4.02.2025 at 08:21, Soumya Koduri wrote:
Hi,

A similar issue (where the LC process got stuck) was fixed and backported to Squid (in 19.1.0) - https://tracker.ceph.com/issues/65666

Could you please re-check the Ceph version on your system. Does the LC processing work on regular objects and on other buckets on your system? Can you verify the storage class of the objects using an S3 client once LC completes its execution.

Thanks,
Soumya

On 2/3/25 4:36 AM, Adam Prycki wrote:

Hello,

this weekend I was trying to test if Squid still suffers from orphaned objects from multipart uploads.
https://tracker.ceph.com/issues/44660

I've previously encountered major issues with orphaned objects when using lifecycle transitions and server-side copies, especially on slower clusters. So far I couldn't induce orphaned objects during upload and server-side copy :D

I couldn't quickly test lifecycle transition because I cannot get `radosgw-admin lc process` to work on any bucket. I'm using a 19.2.0 cephadm cluster with fresh rgw pools (I've removed all rgw and created a fresh instance).

The lifecycle config is very simple and based on the aws example:

<LifecycleConfiguration>
  <Rule>
    <ID>transition</ID>
    <Filter>
      <Prefix></Prefix>
    </Filter>
    <Status>Enabled</Status>
    <Transition>
      <Days>0</Days>
      <StorageClass>STANDARD_IA</StorageClass>
    </Transition>
  </Rule>
</LifecycleConfiguration>

I have STANDARD and STANDARD_IA storage classes in my placement. I'm testing on a bucket with 2 objects, one on STANDARD and one on STANDARD_IA.

[
  {
    "key": "default-placement",
    "val": {
      "index_pool": "n8.rgw.buckets.default-placement.index",
      "storage_classes": {
        "STANDARD": {
          "data_pool": "n8.rgw.buckets.default-placement.data.STANDARD"
        },
        "STANDARD_IA": {
          "data_pool": "n8.rgw.buckets.default-placement.data.STANDARD_IA"
        }
      },
      "data_extra_pool": "n8.rgw.buckets.default-placement.non-ec",
      "index_type": 0,
      "inline_data": true
    }
  }
]

Running `radosgw-admin lc process` results in this error:

lifecycle: RGWLC::process() head.marker !empty() at START for shard==lc.14

Running `radosgw-admin lc process --bucket test1` results in no output. Ceph claims that the lifecycle executed, but no data was copied between pools:

radosgw-admin lc list
[
  {
    "bucket": ":test1:a0f7b0da-04a7-4053-b710-a15897233e86.334949.1",
    "shard": "lc.14",
    "started": "Sun, 02 Feb 2025 22:45:37 GMT",
    "status": "COMPLETE"
  }
]

I've looked on the Ceph issue tracker but couldn't find anyone mentioning similar lifecycle issues on Squid. (My test buckets are not versioned.)

Is this a bug or some kind of misconfiguration?

Best regards
Adam Prycki
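For completeness, regarding verifying the storage class with an S3 client: one way to check it from the client side with the aws CLI (the endpoint URL and object key below are placeholders for my setup):

# Check the storage class of a single object via the S3 API.
aws s3api head-object \
    --endpoint-url http://rgw.example:8080 \
    --bucket test1 \
    --key testobject
# After a successful transition the response should include
# "StorageClass": "STANDARD_IA". (AWS-style responses may omit the field
# for STANDARD objects, so its absence isn't necessarily an error.)

# Listing the bucket shows the storage class per object as well:
aws s3api list-objects-v2 \
    --endpoint-url http://rgw.example:8080 \
    --bucket test1 \
    --query 'Contents[].[Key,StorageClass]'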
_______________________________________________ ceph-users mailing list -- ceph-users@xxxxxxx To unsubscribe send an email to ceph-users-leave@xxxxxxx