Hello,
this weekend I was trying to test whether Squid still suffers from orphaned
objects left behind by multipart uploads:
https://tracker.ceph.com/issues/44660
I've previously encountered major issues with orphaned objects when
using lifecycle transitions and server-side copies, especially on slower
clusters.
So far I haven't been able to induce orphaned objects during upload or
server-side copy :D
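For context, the kind of orphan check I mean is along these lines (just a
sketch, not my exact commands; the pool name is taken from my placement
config below):

# compare RADOS objects in the data pool against what RGW knows about;
# rgw-orphan-list writes its result files into the current directory
rgw-orphan-list n8.rgw.buckets.default-placement.data.STANDARD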
I couldn't quickly test lifecycle transitions, though, because I cannot get
`radosgw-admin lc process` to work on any bucket.
I'm using a 19.2.0 cephadm cluster with fresh RGW pools (I removed all
RGW components and created a fresh instance).
The lifecycle config is very simple and based on the AWS example:
<LifecycleConfiguration>
  <Rule>
    <ID>transition</ID>
    <Filter>
      <Prefix></Prefix>
    </Filter>
    <Status>Enabled</Status>
    <Transition>
      <Days>0</Days>
      <StorageClass>STANDARD_IA</StorageClass>
    </Transition>
  </Rule>
</LifecycleConfiguration>
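For completeness, I applied the policy roughly like this (a sketch, assuming
an s3cmd setup pointed at the RGW endpoint; any S3 client that can set a
lifecycle configuration should do the same):

# upload the XML lifecycle policy to the test bucket and read it back
s3cmd setlifecycle lifecycle.xml s3://test1
s3cmd getlifecycle s3://test1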
I have STANDARD and STANDARD_IA storage classes in my placement target, and
I'm testing on a bucket with 2 objects, one in STANDARD and one in
STANDARD_IA. The placement looks like this:
[
  {
    "key": "default-placement",
    "val": {
      "index_pool": "n8.rgw.buckets.default-placement.index",
      "storage_classes": {
        "STANDARD": {
          "data_pool": "n8.rgw.buckets.default-placement.data.STANDARD"
        },
        "STANDARD_IA": {
          "data_pool": "n8.rgw.buckets.default-placement.data.STANDARD_IA"
        }
      },
      "data_extra_pool": "n8.rgw.buckets.default-placement.non-ec",
      "index_type": 0,
      "inline_data": true
    }
  }
]
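For what it's worth, this is the kind of check I use to confirm which data
pool an object's data actually landed in (a sketch; the object name is just
a placeholder):

# show the object's manifest, including its placement, as RGW sees it
radosgw-admin object stat --bucket=test1 --object=<object-name>
# or list the data pools directly
rados -p n8.rgw.buckets.default-placement.data.STANDARD ls
rados -p n8.rgw.buckets.default-placement.data.STANDARD_IA ls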
Running `radosgw-admin lc process` results in this error:
lifecycle: RGWLC::process() head.marker !empty() at START for shard==lc.14
Running `radosgw-admin lc process --bucket test1` produces no output.
Ceph claims that the lifecycle run completed, but no data was copied between
the pools; `radosgw-admin lc list` shows:
[
  {
    "bucket": ":test1:a0f7b0da-04a7-4053-b710-a15897233e86.334949.1",
    "shard": "lc.14",
    "started": "Sun, 02 Feb 2025 22:45:37 GMT",
    "status": "COMPLETE"
  }
]
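For an S3-side view, a head-object call shows the storage class RGW reports
for an object, along these lines (a sketch; the endpoint and key are
placeholders):

# StorageClass in the response shows the object's current storage class
# (it is omitted for STANDARD on AWS; I'd expect RGW to behave the same)
aws --endpoint-url http://<rgw-endpoint> s3api head-object --bucket test1 --key <object-name>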
I've looked through the Ceph issue tracker but couldn't find anyone mentioning
similar lifecycle issues on Squid. (My test buckets are not versioned.)
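One thing I'm not sure about is whether a <Days>0</Days> transition is
considered due immediately or only after the usual day rounding; the only
testing knob I'm aware of there is the LC debug interval, something like
this (a sketch, untested on this cluster; the service name is a placeholder):

# make the lifecycle thread treat one "day" as 10 seconds (testing only)
ceph config set client.rgw rgw_lc_debug_interval 10
ceph orch restart rgw.<service-name>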
Is this a bug or some kind of misconfiguration?
Best regards
Adam Prycki