Re: RGW Lifecycle Processing and Promote Master Process

Hi Casey,

Thanks for your reply. To confirm I understand you correctly, you are saying:

- My Ceph multisite system should be in a state where lifecycle processing is running on all sites/zones.
- Using "radosgw-admin lc reshard fix" on the non-master site to kick this off is an acceptable workaround?
- It's a bug that is causing this not to happen automatically.

Kindest regards,
Alex

-----Original Message-----
From: Casey Bodley <cbodley@xxxxxxxxxx> 
Sent: 19 August 2020 15:43
To: Alex Hussein-Kershaw <Alex.Hussein-Kershaw@xxxxxxxxxxxxxx>
Cc: ceph-users@xxxxxxx
Subject: Re:  RGW Lifecycle Processing and Promote Master Process


On Fri, Aug 14, 2020 at 9:25 AM Alex Hussein-Kershaw <Alex.Hussein-Kershaw@xxxxxxxxxxxxxx> wrote:
>
> Hi,
>
> I've previously discussed some issues I've had with the RGW lifecycle processing. I've discovered that the root cause of my problem is that:
>
>   *   I'm running a multisite configuration
>      *   Lifecycle processing is done on the master site each night. `radosgw-admin lc list` correctly returns all buckets with lc config.
>   *   I simulate the master site being destroyed from my VM host.
>   *   I promote the secondary site to master following the instructions here: https://docs.ceph.com/docs/master/radosgw/multisite/
>      *   The new master site isn't doing any lifecycle processing. `radosgw-admin lc list` returns empty.
>   *   I recreate a cluster and pair it with the new master site to get back to having multisite redundancy.
>      *   Neither site is doing any lifecycle processing. `radosgw-admin lc list` returns empty.
> So in the process of failover/recovery I have gone from having two paired clusters performing lifecycle processing, to two paired clusters NOT performing lifecycle processing.
>
> Is this behaviour expected? I've found that running `radosgw-admin lc reshard fix` on a cluster will "remind" it that it needs to do lifecycle processing. However, I found no mention in the docs of having to use this command for this purpose; the docs state it's only relevant on earlier Ceph versions. I'm running Nautilus 14.2.9.
>
> In addition, if I have two healthy clusters paired in a multisite system and swap the master cluster by promoting the non-master, the demoted cluster seems to continue doing lifecycle processing, while the promoted cluster does not. If I then run `radosgw-admin lc reshard fix` on the promoted cluster, both clusters claim to be doing the processing. Is this a happy state to be in?
>
> Does anyone have any experience with this?
>
> Thanks,
> Alex
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>

There's a defect in metadata sync (https://tracker.ceph.com/issues/44268) which prevents buckets with lifecycle policies from being indexed for lifecycle processing on non-master zones. It sounds like the 'lc reshard fix' command is adding those buckets back to that index for processing.
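
Roughly, on a zone whose lifecycle index has ended up empty (the newly promoted or non-master zone), that would look something like the sketch below; treat it as illustrative rather than exact output:

    # check whether any buckets are queued for lifecycle processing on this zone
    radosgw-admin lc list

    # if the list is empty even though buckets have lifecycle policies,
    # rebuild the per-zone lifecycle index
    radosgw-admin lc reshard fix

    # re-check; buckets with lifecycle configuration should now be listed
    radosgw-admin lc list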

The intent is for lifecycle processing to occur independently on every zone. That's the only way to guarantee the correct result now that we have PutBucketReplication (and specifically the Filter policy) where any given zone may only hold a subset of the objects from its source.
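
For example, with a filtered replication rule like the sketch below (the endpoint, bucket name, role ARN and prefix are made up; the JSON follows the standard S3 ReplicationConfiguration schema), the destination zone only receives objects under logs/, so lifecycle has to run on that zone to expire exactly the subset it actually holds:

    # sketch: attach a filtered replication rule so only "logs/" objects are replicated
    aws --endpoint-url http://rgw.example.com:8000 s3api put-bucket-replication \
        --bucket my-bucket \
        --replication-configuration '{
            "Role": "arn:aws:iam:::role/replication",
            "Rules": [{
                "ID": "replicate-logs-only",
                "Status": "Enabled",
                "Priority": 1,
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Filter": {"Prefix": "logs/"},
                "Destination": {"Bucket": "arn:aws:s3:::my-bucket"}
            }]
        }'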

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


