I found out a few things about this. I am using 18.2.2 on the existing, loaded cluster and 18.2.4 on the new, unused cluster, both on Rocky Linux.

1. On a new cluster without any load (18.2.4), I can get the expiration to work on both sides for a non-replicated bucket just by creating the lifecycle policy.

2. On a heavily loaded cluster (18.2.2), I cannot get the expiration to work on a heavily-loaded bucket on the slave side. The bucket in question is set to not replicate by a multisite sync policy. Objects written to the master side get deleted as per the policy. Objects written to the slave side never get deleted. The slave side is heavily written to for the bucket in question.

radosgw-admin lc list shows PROCESSING for the non-replicated bucket on the slave zone after the first time it attempts to run, but it never seems to finish on the slave zone, and objects do not appear to be deleted.

If I run:

    radosgw-admin lc process --bucket <bucket>

on the slave side, I get:

    ... NetHandler create_socket couldn't create socket (24) Too many open files

over and over again. However, if I raise ulimit -n to 4096:

    ulimit -n 4096

the expiration starts when I run the radosgw-admin command. (The default ulimit -n value is 1024.)

I do not think the issue is related to a ulimit value in the non-CLI case, as it works on the new cluster that is not being written to frequently, but it makes sense for the CLI command. My loaded cluster has over 1000 OSDs on each side of the multisite. Replication and expiration happen fine for buckets that are replicated.

I am hoping my issue is related to 18.2.2 vs 18.2.4, which I will update to soon on my loaded cluster. Any further thoughts are appreciated.

-Chris

On Tuesday, September 3, 2024 at 04:31:46 PM MDT, Christopher Durham <caduceus42@xxxxxxx> wrote:

Soumya,

Thank you for responding. What release was this fixed in? I am using 18.2.2 and am about to go to 18.2.4.

Yes, the lifecycle policy shows on both zones when doing:

# aws --endpoint https://master.fqdn s3api get-bucket-lifecycle-configuration --bucket <bucket name>
# aws --endpoint https://slave.fqdn s3api get-bucket-lifecycle-configuration --bucket <bucket name>

Also, on the slave zone, if I do:

# radosgw-admin lc list

it shows PROCESSING for the bucket in question with a start date of 1970-01-01, but the master zone shows that it COMPLETED within the last day. I also tried, on the slave side:

# radosgw-admin lc process --bucket <bucket name>

and this gives:

... NetHandler create_socket couldn't create socket (24) Too many open files

over and over.

I've deleted the lifecycle policy (and the delete propagated to both sides) and added it back. It replicated to the slave, but with the same results.

Any help would be appreciated.

Thanks
-Chris

On Monday, September 2, 2024 at 12:56:23 PM MDT, Soumya Koduri <skoduri@xxxxxxxxxx> wrote:

On 9/2/24 8:41 PM, Christopher Durham wrote:
> Asking again, does anyone know how to get this working?
> I have multisite sync set up between two sites. Due to bandwidth concerns, I have disabled replication on a given bucket that houses temporary data, using a multisite sync policy. This works fine.
> Most of the writing to this bucket is done on the slave zone.
> The bucket has a lifecycle policy set to delete objects after 3 days, but objects on the slave zone never get deleted.
> How can I get this to work?

I checked this use-case on the latest main and it seems to be working. Though replication is disabled on the bucket, the objects are deleted on both zones as per the lifecycle expiration rule that is set.
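For reference, a minimal expiration-only configuration of that kind can be set on each zone's endpoint directly; the bucket name, rule ID, and day count below are placeholders, and the endpoints are the ones used earlier in this thread. With a file lc.json containing:

{
  "Rules": [
    {
      "ID": "expire-temp-objects",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Expiration": { "Days": 3 }
    }
  ]
}

it can be applied against both zones with:

# aws --endpoint https://master.fqdn s3api put-bucket-lifecycle-configuration --bucket <bucket name> --lifecycle-configuration file://lc.json
# aws --endpoint https://slave.fqdn s3api put-bucket-lifecycle-configuration --bucket <bucket name> --lifecycle-configuration file://lc.json

Setting the same document on both endpoints ensures that each zone has the rule locally, regardless of whether LC policies replicate, which is the manual workaround mentioned just below.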
Could you check using `s3api get-bucket-lifecycle-configuration` whether the bucket has LC rules set on the secondary zone as well. Earlier, the LC policies were not being replicated; that was addressed later (I am not sure in which release). If the LC policy is not replicated, the user has to manually set LC rules on all the zones for that bucket.

Thanks,
Soumya

> Thanks
> Chris
>
>
> On Friday, July 12, 2024 at 04:30:38 PM MDT, Christopher Durham <caduceus42@xxxxxxx> wrote:
>
> Hi,
>
> I have a multisite system with two sites on 18.2.2, on Rocky 8.
>
> I have set up a sync policy to allow replication between sites. I have also created a policy for a given bucket that prevents replication on that given bucket. This all works just fine: objects I create in that bucket on side A do not get replicated to side B, objects I create in that bucket on side B do not get replicated to side A, but replication for all other buckets works fine.
>
> This is great. But I still want to have a lifecycle policy that deletes objects after, say, 4 days.
>
> If I create and install the JSON for this policy via s3api put-bucket-lifecycle-configuration, then only on the master side, side A, do objects get deleted after 4 days. Objects on side B never get deleted.
>
> What am I doing wrong?
>
> Thanks
> -Chris
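For completeness, the per-bucket sync policy described above (one that keeps a single bucket from replicating while the zonegroup-level policy continues to sync everything else) is typically created with a bucket-scoped sync group along the following lines. The group ID here is a placeholder, and the exact options should be checked against the multisite sync policy documentation for the release in use.

Create a bucket-scoped sync group that forbids replication for just this bucket:

# radosgw-admin sync group create --bucket=<bucket name> --group-id=<bucket name>-no-sync --status=forbidden

The resulting per-bucket sync state can then be checked with:

# radosgw-admin bucket sync status --bucket=<bucket name>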