Multi region RGW Config Questions - Quincy

Hello,



I have a Quincy (17.2.6) cluster and am looking to create a multi-zone /
multi-region RGW service. I have a few questions with respect to the published
docs: https://docs.ceph.com/en/quincy/radosgw/multisite/.



In general, I understand the process as:



1.   Create a new REALM, ZONEGROUP, ZONE:

radosgw-admin realm create --rgw-realm=my_new_realm [--default]

radosgw-admin zonegroup create --rgw-zonegroup=my_country \
    --endpoints=http://rgw1:80 --rgw-realm=my_new_realm --master --default

radosgw-admin zone create --rgw-zonegroup=my_country --rgw-zone=my-region \
    --master --default \
    --endpoints={http://fqdn}[,{http://fqdn}]
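For illustration, a zone listing two gateways as endpoints might look like this
(the hostnames are hypothetical; --endpoints takes a comma-separated list of URLs):

```shell
# Hypothetical hostnames, for illustration only.
radosgw-admin zone create --rgw-zonegroup=my_country --rgw-zone=my-region \
    --master --default \
    --endpoints=http://rgw1.example.com:80,http://rgw2.example.com:80
```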





## Question:

If I have multiple RGWs deployed on my cluster, do I specify all of them as
individual endpoints? Or does specifying one RGW automatically propagate the
config to all of them?





2.  Create SYSTEM user



radosgw-admin user create --uid="synchronization-user"
--display-name="Synchronization User" --system

radosgw-admin zone modify --rgw-zone={zone-name} --access-key={access-key}
--secret={secret}

radosgw-admin period update --commit
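As a sanity check (a hedged suggestion, not from the docs above), the zone's
stored configuration can be dumped afterwards to confirm the system user's keys
were applied:

```shell
# Inspect the zone config; the system_key section should now contain
# the access/secret keys set via `zone modify`.
radosgw-admin zone get --rgw-zone={zone-name}
```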





## Question:

Is the SYSTEM user used only for replication? Will creating a new REALM,
ZONEGROUP, and ZONE reset any administrative access for managing RGWs
through ceph-dashboard?



3. Remove DEFAULT REALM, ZONEGROUP, ZONE and supporting pools

radosgw-admin zonegroup delete --rgw-zonegroup=default --rgw-zone=default

radosgw-admin period update --commit

radosgw-admin zone delete --rgw-zone=default

radosgw-admin period update --commit

radosgw-admin zonegroup delete --rgw-zonegroup=default

radosgw-admin period update --commit



ceph osd pool rm default.rgw.control default.rgw.control
--yes-i-really-really-mean-it

ceph osd pool rm default.rgw.data.root default.rgw.data.root
--yes-i-really-really-mean-it

ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it

ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it

ceph osd pool rm default.rgw.users.uid default.rgw.users.uid
--yes-i-really-really-mean-it
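One assumption worth noting: on most clusters pool deletion is disabled by
default, so the `ceph osd pool rm` commands above will refuse to run until the
mon guard is lifted (and it is worth re-enabling it afterwards):

```shell
# Temporarily allow pool deletion, then restore the safety guard.
ceph config set mon mon_allow_pool_delete true
# ... run the `ceph osd pool rm` commands above ...
ceph config set mon mon_allow_pool_delete false
```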



4. UPDATING CEPH CONFIG FILE / RGW CONFIG VIA CEPH ORCH



## Question:

Since I’m using ceph orch, would I simply set the rgw_zone property via
CLUSTER -> CONFIGURATION on the ceph-dashboard?
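For comparison, a sketch of doing the same from the CLI, assuming a
cephadm-managed cluster (service, realm, and zone names here are the
hypothetical ones from above): `ceph orch apply rgw` can bind the realm/zone
to the gateway daemons it deploys.

```shell
# Hypothetical names; cephadm sets rgw_realm/rgw_zone for the daemons
# it deploys under this service.
ceph orch apply rgw my_service --realm=my_new_realm --zone=my-region \
    --placement="2 host1 host2"
```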


Thank you.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



