Re: How to run multiple RadosGW instances under the same zone

It works. Thank you for your time (Srinivas and Ben).

Supplement:
client.radosgw.gateway-1 and client.radosgw.gateway-2 only need to share the same Ceph pools.
A keyring must be created for both client.radosgw.gateway-1 and client.radosgw.gateway-2.
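For example, separate keyrings could be created along these lines (the caps shown are just an assumption, adjust to your own policy):

# ceph auth get-or-create client.radosgw.gateway-1 mon 'allow rw' osd 'allow rwx' -o /etc/ceph/ceph.client.radosgw.gateway-1.keyring
# ceph auth get-or-create client.radosgw.gateway-2 mon 'allow rw' osd 'allow rwx' -o /etc/ceph/ceph.client.radosgw.gateway-2.keyring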

thx

joseph


On 01/05/2016 01:26 PM, Srinivasula Maram wrote:

Yes, it should work. Even if you have multiple radosgw instances, all instances use the same pools.
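You can confirm that by listing the pools after both gateways have started, e.g.:

# rados lspools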

 

Ceph.conf:

[client.radosgw.gateway-1]
host = host1
keyring = /etc/ceph/ceph.client.admin.keyring
rgw_socket_path = /var/log/ceph/radosgw1.sock
log_file = /var/log/ceph/radosgw-1.host1.log
rgw_max_chunk_size = 4194304
rgw_frontends = "civetweb port=8081"
rgw_dns_name = host1
rgw_ops_log_rados = false
rgw_enable_ops_log = false
rgw_cache_lru_size = 1000000
rgw_enable_usage_log = false
rgw_usage_log_tick_interval = 30
rgw_usage_log_flush_threshold = 1024
rgw_exit_timeout_secs = 600

 

[client.radosgw.gateway-2]
host = host2
keyring = /etc/ceph/ceph.client.admin.keyring
rgw_socket_path = /var/log/ceph/radosgw2.sock
log_file = /var/log/ceph/radosgw-2.host2.log
rgw_max_chunk_size = 4194304
rgw_frontends = "civetweb port=8082"
rgw_dns_name = host2
rgw_ops_log_rados = false
rgw_enable_ops_log = false
rgw_cache_lru_size = 1000000
rgw_enable_usage_log = false
rgw_usage_log_tick_interval = 30
rgw_usage_log_flush_threshold = 1024
rgw_exit_timeout_secs = 600
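Each instance is then started under its own client name, for example (default cluster name assumed; pass --cluster if yours differs):

# radosgw -c /etc/ceph/ceph.conf -n client.radosgw.gateway-1    (on host1)
# radosgw -c /etc/ceph/ceph.conf -n client.radosgw.gateway-2    (on host2)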

Thanks,

Srinivas

 

From: Ben Hines [mailto:bhines@xxxxxxxxx]
Sent: Tuesday, January 05, 2016 10:07 AM
To: Yang Honggang
Cc: Srinivasula Maram; ceph-users@xxxxxxxx; Javen Wu
Subject: Re: How to run multiple RadosGW instances under the same zone

 

It works fine. The federated config reference is not related to running multiple instances in the same zone.

 

Just set up 2 radosgws and give each instance the exact same configuration. (I use different client names in ceph.conf, but I bet it would work even if the client names were identical.)

 

Official documentation on this very common use case would be a good idea; I also had to figure this out on my own.

 

On Mon, Jan 4, 2016 at 6:21 PM, Yang Honggang <joseph.yang@xxxxxxxxxxxx> wrote:

Hello Srinivas,

Yes, we can use Haproxy as a frontend. But the precondition is that multiple RadosGW instances sharing
the SAME CEPH POOLS are running. I only want the master zone to keep one copy of all the data, and I want
to be able to access that data through ANY radosgw instance.
http://docs.ceph.com/docs/master/radosgw/federated-config/ says that
"zones may have more than one Ceph Object Gateway instance per zone.", so I need the official way
to set up these radosgw instances.

thx


joseph

 

On 01/04/2016 06:37 PM, Srinivasula Maram wrote:

Hi Joseph,

 

You can try haproxy as a proxy for load balancing and failover.
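A minimal haproxy sketch for that (host names and ports are only an illustration, matching the example config quoted above) could look like:

frontend rgw_front
    bind *:80
    default_backend rgw_back

backend rgw_back
    balance roundrobin
    server rgw1 host1:8081 check
    server rgw2 host2:8082 check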

 

Thanks,

Srinivas

 

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Joseph Yang
Sent: Monday, January 04, 2016 2:09 PM
To: ceph-users@xxxxxxxx; Joseph Yang
Subject: How to run multiple RadosGW instances under the same zone

 

 

Hello,
 
How to run multiple RadosGW instances under the same zone?
 
Assume there are two hosts, HOST_1 and HOST_2. I want to run
two RadosGW instances on these two hosts for my zone ZONE_MULI.
So, when one of the radosgw instances is down, I can still access the zone.
 
There are some questions:
1. How many ceph users should I create?
2. How many rados users should I create?
3. How to set ZONE_MULI's access_key/secret_key?
4. How to set the 'host' section in the ceph conf file for these two 
   radosgw instances?
5. How to start the instances?
    # radosgw --cluster My_Cluster -n ?_which_rados_user_?
 
I read http://docs.ceph.com/docs/master/radosgw/federated-config/, but
there seems to be no explanation of this.
 
Your answer is appreciated!
 
thx
 
Joseph

 

 

 




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
