Re: ceph 17.2.6 and iam roles (pr#48030)

On Tue, Apr 11, 2023 at 3:19 PM Christopher Durham <caduceus42@xxxxxxx> wrote:
>
>
> Hi,
> I see that this PR: https://github.com/ceph/ceph/pull/48030
> made it into ceph 17.2.6, as per the changelog at https://docs.ceph.com/en/latest/releases/quincy/. That's great.
> But my scenario is as follows:
> I have two clusters set up as multisite. Because of the lack of replication for IAM roles, we have set things up so that roles on the primary 'manually' get replicated to the secondary site via a python script. Thus, if I create a role on the primary, or add/delete users or buckets from that role, the role, including the AssumeRolePolicyDocument and policies, gets pushed to the replicated site. This has served us well for three years.
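
For reference, a manual copy like the one described above can be sketched roughly as follows, assuming boto3 pointed at each zone's RGW IAM endpoint; the endpoints, credentials, and helper names are placeholders, not taken from the original script:

# Rough sketch only: copy one IAM role, its trust policy, and its inline
# policies from the primary zone to the secondary zone via the RGW IAM API.
# Endpoints/credentials are placeholders; the RGW user needs IAM caps.
import json
import boto3

def iam_client(endpoint, access_key, secret_key):
    return boto3.client('iam',
                        endpoint_url=endpoint,
                        aws_access_key_id=access_key,
                        aws_secret_access_key=secret_key,
                        region_name='default')

def as_json(doc):
    # boto3 may hand back policy documents as parsed dicts or raw strings
    return doc if isinstance(doc, str) else json.dumps(doc)

def copy_role(src, dst, role_name):
    role = src.get_role(RoleName=role_name)['Role']
    # Recreate the role with the same name, path, and trust policy;
    # note the RoleId on the destination will still differ.
    dst.create_role(RoleName=role_name,
                    Path=role.get('Path', '/'),
                    AssumeRolePolicyDocument=as_json(role['AssumeRolePolicyDocument']))
    # Copy every inline permission policy attached to the role.
    for policy_name in src.list_role_policies(RoleName=role_name)['PolicyNames']:
        doc = src.get_role_policy(RoleName=role_name,
                                  PolicyName=policy_name)['PolicyDocument']
        dst.put_role_policy(RoleName=role_name,
                            PolicyName=policy_name,
                            PolicyDocument=as_json(doc))

primary = iam_client('https://rgw.primary.example.com', 'ACCESS', 'SECRET')
secondary = iam_client('https://rgw.secondary.example.com', 'ACCESS', 'SECRET')
copy_role(primary, secondary, 'my-role')
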
> With the advent of this fix, what should I do before I upgrade to 17.2.6 (currently on 17.2.5, Rocky 8)?
>
> I know that in my situation, roles of the same name have different RoleIDs on the two sites. What should I do before I upgrade? Possibilities that *could* happen if I don't rectify things as we upgrade:
> 1. The different RoleIDs lead to two roles of the same name on the replicated site, perhaps with the system unable to address/look at/modify either
> 2. Roles just don't get replicated to the second site

No replication would happen until the metadata changes again on the
primary zone. Once that gets triggered, the role metadata would
probably fail to sync due to the name conflicts.
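
For example, one rough way to spot the names that would conflict before upgrading, assuming boto3 IAM clients like the ones sketched earlier (the helper below is hypothetical):

def find_role_conflicts(primary, secondary):
    # Report role names present on both zones whose RoleIds differ.
    # 'primary' and 'secondary' are boto3 IAM clients pointed at each
    # zone's RGW endpoint (see the earlier sketch).
    def roles_by_name(client):
        names = {}
        for page in client.get_paginator('list_roles').paginate():
            for role in page['Roles']:
                names[role['RoleName']] = role['RoleId']
        return names

    pri = roles_by_name(primary)
    sec = roles_by_name(secondary)
    return sorted(n for n in pri if n in sec and pri[n] != sec[n])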

>
> or other similar situations, all of which I want to avoid.
> Perhaps the safest thing to do is to remove all roles on the secondary site, upgrade, and then force a replication of roles. (How would I *force* that for IAM roles, if that is the correct answer?)

This removal will probably be necessary to avoid those conflicts. Once
that's done, you can force a metadata full sync on the secondary zone
by running 'radosgw-admin metadata sync init' there, then restarting
its gateways. This will have to resync all of the bucket and user
metadata as well.
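
A rough sketch of that cleanup, again assuming a boto3 IAM client pointed at the secondary zone's RGW endpoint (the helper is hypothetical; the radosgw-admin step afterwards is the one described above):

def delete_all_roles(secondary):
    # Remove every IAM role on the secondary zone before the upgrade.
    # Collect the names first so deletions don't disturb the listing.
    names = [r['RoleName']
             for page in secondary.get_paginator('list_roles').paginate()
             for r in page['Roles']]
    for name in names:
        # Inline policies have to be removed before the role itself.
        for policy in secondary.list_role_policies(RoleName=name)['PolicyNames']:
            secondary.delete_role_policy(RoleName=name, PolicyName=policy)
        secondary.delete_role(RoleName=name)

# Then, as described above, on the secondary zone run:
#   radosgw-admin metadata sync init
# and restart its gateways to trigger the full metadata resync
# (bucket and user metadata will be resynced as well).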

> Here is the original bug report:
>
> https://tracker.ceph.com/issues/57364
> Thanks!
> -Chris
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



