Re: can run more than one rgw multisite realm on one ceph cluster

On 12/19/19 5:44 AM, tdados@xxxxxxxxxxx wrote:
Hello,
I managed to do this 3 months ago with 2 realms, as I wanted to connect 2 different OpenStack environments (object store) and use different zones on the same Ceph cluster.
Now, unfortunately, I am not able to recreate the scenario :( as the periods are getting mixed up, or I am doing something wrong.

Basically, I had the below output (from rados df) in my lab:
POOL_NAME USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD WR_OPS WR USED COMPR UNDER COMPR
.rgw.root               8.2 KiB      32      0     96                  0       0        0    326 252 KiB     84  56 KiB        0 B         0 B
cinder-volumes          7.6 GiB    2001      0   6003                  0       0        0  50706  42 MiB      0     0 B        0 B         0 B
cinder-volumes-fast       774 B       6      0     18                  0       0        0   6864 5.5 MiB      0     0 B        0 B         0 B
device_health_metrics   4.3 KiB       1      0      3                  0       0        0      4   4 KiB      4   4 KiB        0 B         0 B
glance-images           1.2 GiB     171      0    513                  0       0        0   4769 361 MiB    800 897 MiB        0 B         0 B
test                        0 B       0      0      0                  0       0        0      0     0 B      0     0 B        0 B         0 B
old-dev.rgw.buckets.data  106 MiB      27      0     81                  0       0        0      0     0 B     35 106 MiB        0 B         0 B
old-dev.rgw.buckets.index     0 B       1      0      3                  0       0        0     36  36 KiB      4   2 KiB        0 B         0 B
old-dev.rgw.control           0 B       8      0     24                  0       0        0      0     0 B      0     0 B        0 B         0 B
old-dev.rgw.log              50 B     177      0    531                  0       0        0   1057 880 KiB    703   1 KiB        0 B         0 B
old-dev.rgw.meta            763 B       4      0     12                  0       0        0      3   3 KiB     11   5 KiB        0 B         0 B
new-dev.rgw.buckets.data  280 MiB      70      0    210                  0       0        0      0     0 B     78 280 MiB        0 B         0 B
new-dev.rgw.buckets.index     0 B       1      0      3                  0       0        0     34  34 KiB      4   2 KiB        0 B         0 B
new-dev.rgw.control           0 B       8      0     24                  0       0        0      0     0 B      0     0 B        0 B         0 B
new-dev.rgw.log              50 B     177      0    531                  0       0        0    526 350 KiB    353   1 KiB        0 B         0 B
new-dev.rgw.meta            964 B       4      0     12                  0       0        0      1   1 KiB      9   4 KiB        0 B         0 B

From my notes, I had something like this:

radosgw-admin realm create --rgw-realm=old-dev --default --id rgw.radosgw-srv-2
radosgw-admin zonegroup create --rgw-zonegroup=old-dev --endpoints=http://192.168.58.51:7480 --rgw-realm=old-dev --master --default  --id rgw.radosgw-srv-2
radosgw-admin zone create --rgw-zonegroup=old-dev --rgw-zone=old-dev  --endpoints=http://192.168.58.51:7480 --rgw-realm=old-dev --master --default --id rgw.radosgw-srv-2

radosgw-admin realm create --rgw-realm=new-dev --default --id rgw.radosgw-srv
radosgw-admin zonegroup create --rgw-zonegroup=new-dev --endpoints=http://192.168.58.50:7480 --rgw-realm=new-dev --master --default  --id rgw.radosgw-srv
radosgw-admin zone create --rgw-zonegroup=new-dev  --rgw-zone=new-dev --master --default --endpoints=http://192.168.58.50:7480 --rgw-realm=new-dev  --id rgw.radosgw-srv
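To sanity-check that both realms came out as intended, something like this should list them and their zonegroups (untested here, but these are standard radosgw-admin subcommands, using the same --id values as above):
radosgw-admin realm list
radosgw-admin zonegroup list --rgw-realm=old-dev --id rgw.radosgw-srv-2
radosgw-admin zonegroup list --rgw-realm=new-dev --id rgw.radosgw-srv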

But then you have to run:
radosgw-admin period update --commit
and to be honest I am not sure what the order is, or from which node to do that specifically, because after I set up the first one, the second one gets issues. It has to do with the periods, but I haven't managed to break it down.

Note that only one of the realms in the cluster can be --default at a time, so you'll want to specify --rgw-realm= in each radosgw-admin command. Each realm has its own period, so you need to run 'period update --commit --realm-id=...' for both realms.
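For example, something like this (a sketch using the realm names from your commands; selecting the realm by name with --rgw-realm should work in place of --realm-id):
radosgw-admin period update --commit --rgw-realm=old-dev
radosgw-admin period update --commit --rgw-realm=new-dev
And if you ever need to move the --default flag between realms, 'radosgw-admin realm default --rgw-realm=<name>' switches it.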


Of course, you should specify the zone in the ceph config for each radosgw to make it explicit. But restarting the radosgw doesn't really work for the 2nd radosgw server. Still working on that, but I thought I should give you some hints.
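For reference, a minimal sketch of what that could look like in ceph.conf, assuming the two daemons run as client.rgw.radosgw-srv-2 and client.rgw.radosgw-srv (the --id values used above):

[client.rgw.radosgw-srv-2]
rgw_realm = old-dev
rgw_zonegroup = old-dev
rgw_zone = old-dev

[client.rgw.radosgw-srv]
rgw_realm = new-dev
rgw_zonegroup = new-dev
rgw_zone = new-dev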

You might need to remove the default zone and zonegroup (and their old pools) as well:

radosgw-admin zonegroup remove --rgw-zonegroup=default --rgw-zone=default --id rgw.radosgw-srv-2
radosgw-admin zone delete --rgw-zone=default --id rgw.radosgw-srv-2
radosgw-admin zonegroup delete --rgw-zonegroup=default --id rgw.radosgw-srv-2
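If the default.rgw.* pools were ever created, they can be dropped afterwards too, along these lines (pool names are the usual defaults and are assumed here; mon_allow_pool_delete has to be enabled):
ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it
ceph osd pool rm default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it
ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it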
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



