Re: Pacific 16.2.6: Trying to get an RGW running for a second zonegroup in an existing realm

Well, looks like not many people have tried this.
And to me it looks like a bug/omission in "ceph orch apply rgw".

After digging through the setup I figured out that the unit.run file for the new rgw.zone21 process/container doesn't get the --rgw-zonegroup (or --rgw-region) parameter for radosgw.
"ceph orch apply rgw" doesn't seem to forward those parameters.
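If it helps anyone: newer cephadm releases appear to accept an rgw_zonegroup field in the RGW service spec (it seems to be absent in early Pacific, which would explain this), so on those versions a spec file might avoid editing unit.run at all. A sketch, not verified on 16.2.6 — field names are assumed from the documented cephadm RGW spec format:

```shell
# Hedged sketch: write an RGW service spec that names the zonegroup
# explicitly. rgw_zonegroup is only honored on releases whose spec
# supports it; everything else matches the setup described above.
cat > rgw-zone21.yaml <<'EOF'
service_type: rgw
service_id: zone21
placement:
  label: rgwnosync
  count_per_host: 1
spec:
  rgw_realm: myrealm
  rgw_zone: zone21
  rgw_zonegroup: zg2
  rgw_frontend_port: 8080
EOF
# Then apply it (needs a running cluster, so shown as a comment here):
#   ceph orch apply -i rgw-zone21.yaml
```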

Once I added "--rgw-zone=zone21 --rgw-zonegroup=zg2 --rgw-region=zg2" to the radosgw command line in the unit.run file and restarted the service on that node, the process came up and appears to work correctly.
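For the record, the manual workaround boils down to something like the following. The echo line is a simplified stand-in for the real unit.run contents (the real file lives under /var/lib/ceph/<fsid>/ on the host), and --rgw-region is just the pre-Jewel alias for --rgw-zonegroup, so I've left it out here:

```shell
# Minimal sketch of the manual edit: splice the missing zone/zonegroup
# flags into the radosgw command line inside unit.run.
unit_run=$(mktemp)
# Simplified stand-in for the real unit.run line (not the actual contents):
echo 'exec /usr/bin/podman run ... <image> radosgw -n client.rgw.zone21 -f' > "$unit_run"
# Insert the flags right after the radosgw binary name:
sed -i 's|radosgw |radosgw --rgw-zone=zone21 --rgw-zonegroup=zg2 |' "$unit_run"
cat "$unit_run"
# On the real host, follow up with:
#   systemctl restart ceph-<fsid>@rgw.zone21.<host>.<id>.service
```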

The behavior is somewhat unexpected to me:
Buckets created in any of my 3 zones, and their contents, get synced to all other zones, regardless of zonegroup.
Only buckets/objects created in a zone of one zonegroup can't be deleted from a zone in the other zonegroup; the delete returns a 301 without a redirect URI.

Not sure if that's a bug or a feature.

Ciao, Uli

> On 01. 02 2022, at 18:09, Ulrich Klein <ulrich.klein@xxxxxxxxxxxxxx> wrote:
> 
> Hi,
> 
> Maybe someone who knows the commands can help me with my problem ....
> 
> I have a small 6-node cluster running 16.2.6, deployed with cephadm, and another one on the same version.
> Both clusters are exclusively used for RGW/S3.
> I have a realm myrealm, a zonegroup zg1, and a zone on each cluster: zone1 (the master) on the first, zone2 on the second.
> I have RGWs running in both clusters behind HAProxy, deployed via ceph orch and labels, and they work just fine.
> 
> Now I want to add a second zonegroup zg2 on the first cluster, with a zone zone21, master, and no secondary zone.
> And, to use that zone I want to deploy another RGW for it.
> 
> So, I did mostly the same as for the first zonegroup (nceph03 is a node in the first cluster, the label rgwnosync is set and the keys are correct):
> 
> radosgw-admin zonegroup create --rgw-zonegroup=zg2 --endpoints=http://nceph03.example.com:8080 --rgw-realm=myrealm
> radosgw-admin zone create --rgw-zonegroup=zg2 --rgw-zone=zone21 --master --endpoints=http://nceph03.example.com:8080
> 
> radosgw-admin zone modify --rgw-zonegroup=zg2 --rgw-zone=zone21 --access-key=ackey --secret=sekey
> 
> radosgw-admin zonegroup add --rgw-zonegroup=zg2 --rgw-zone=zone21
> 
> radosgw-admin period update --commit
> 
> Everything looks as expected (by me) up to this point. But then I try to add an RGW process for that zone:
> 
> ceph orch apply rgw zone21 --realm=myrealm --rgw-zonegroup=zg2 --zone=zone21 '--placement=label:rgwnosync count-per-host:1' --port=8080
> 
> The process comes up and dies, spitting out these messages in the log:
> 
> ...  1 rgw main: Cannot find zone id=f9151746-09e9-4854-9159-9df35a3457bf (name=zone21), switching to local zonegroup configuration
> ...
> ...
> ... -1 rgw main: Cannot find zone id=f9151746-09e9-4854-9159-9df35a3457bf (name=zone21)
> ...
> ...
> ...  0 rgw main: ERROR: failed to start notify service ((22) Invalid argument
> ...
> ...
> ...  0 rgw main: ERROR: failed to init services (ret=(22) Invalid argument)
> ...
> ...
> ... -1 Couldn't init storage provider (RADOS)
> 
> Somehow it looks like radosgw is looking for that zone in the default master zonegroup zg1 and I can't figure out how to tell it to use zg2.
> 
> I've tried variations of the "ceph orch apply rgw" command with and without --rgw-zonegroup, with and without --realm, and a whole lot more, but nothing seems to make any difference.
> I even tried putting settings in various ceph.conf files (although I already know they're ignored).
> 
> Am I missing some step or command or setting or ...?
> 
> Any help would be appreciated.
> 
> Ciao, Uli
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx




