Hi,
do you really need multi-site since you mentioned that you have one
cluster? Maybe start with single-site RGW [1] since there's no
replication target anyway.
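For a plain single-site setup you don't even need the realm/zonegroup/zone commands by hand; cephadm can deploy the RGW daemons with the default zone in one step [1]. A minimal sketch (the service name, host names and port are just examples, adjust to your environment):

```
# Deploy two RGW daemons on two hosts, listening on port 8000,
# without creating a realm/zonegroup/zone manually:
ceph orch apply rgw sandbox --placement="2 host1 host2" --port=8000
```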
If you deploy multiple RGW daemons you'll probably want an ingress
service [2] as well and point your zone endpoints to its virtual IP.
The ingress service deploys an haproxy and a keepalived daemon on each
of the selected nodes; one of them holds the virtual IP at any given
time. I just added an ingress service to my existing RGWs in my lab
and it seems to work. I don't have any clients connected to the
cluster, but the logs don't contain any errors, so I think I'm good.
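In case it helps, here's a minimal sketch of such an ingress spec based on [2], assuming the RGW service from your orch command is named rgw.sandbox; the virtual IP, netmask and ports are placeholders you'd have to adjust:

```
# Write a minimal ingress spec and apply it with cephadm.
# service_id/backend_service must match the existing RGW service;
# virtual_ip and the ports are placeholders for this example.
cat > rgw-ingress.yaml <<EOF
service_type: ingress
service_id: rgw.sandbox
placement:
  count: 2
spec:
  backend_service: rgw.sandbox
  virtual_ip: 192.168.122.100/24
  frontend_port: 8080
  monitor_port: 1967
EOF
ceph orch apply -i rgw-ingress.yaml
```

Your zone endpoints (and clients like Veeam) would then point at the virtual IP and frontend port instead of the individual RGW daemons.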
Regards,
Eugen
[1] https://docs.ceph.com/en/latest/cephadm/services/rgw/#deploy-rgws
[2]
https://docs.ceph.com/en/latest/cephadm/services/rgw/#high-availability-service-for-rgw
Quoting Michael Worsham <mworsham@xxxxxxxxxxxxxxxxxx>:
I have set up a 'reef' Ceph cluster using Cephadm and Ansible in a
VMware ESXi 7 / Ubuntu 22.04 lab environment per the how-to guide
provided here:
https://computingforgeeks.com/install-ceph-storage-cluster-on-ubuntu-linux-servers/.
The installation steps were fairly easy and I was able to get the
environment up and running in about 15 minutes under VMware ESXi 7.
I have buckets and pools already set up. However, the ceph.io site is
confusing on how to set up the Rados Gateway (radosgw) with
multi-site -- https://docs.ceph.com/en/latest/radosgw/multisite/. Is
a copy of HAProxy also needed to handle the front-end load
balancing, or is it implied that Ceph sets it up?
The command-line script I was planning to use for setting up the RGW:
```
radosgw-admin realm create --rgw-realm=sandbox --default
radosgw-admin zonegroup create --rgw-zonegroup=sandbox --master --default
radosgw-admin zone create --rgw-zonegroup=sandbox --rgw-zone=sandbox \
    --master --default
radosgw-admin period update --rgw-realm=sandbox --commit
ceph orch apply rgw sandbox --realm=sandbox --zone=sandbox \
    --placement="2 ceph-mon1 ceph-mon2" --port=8000
```
What other steps are needed to get the RGW up and running so that it
can be presented to something like Veeam for performance and I/O
testing?
-- Michael
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx