Re: Geo-replication with RADOS GW

On Jan 28, 2013, at 11:32 AM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:

> On Monday, January 28, 2013 at 9:54 AM, Ben Rowland wrote:
>> Hi,
>> 
>> I'm considering using Ceph to create a cluster across several data
>> centres, with the strict requirement that writes should go to both
>> DCs. This seems possible by specifying rules in the CRUSH map, with
>> an understood latency hit resulting from purely synchronous writes.
>> 
>> The part I'm unsure about is how the RADOS GW fits into this picture.
>> For high availability (and to improve best-case latency on reads),
>> we'd want to run a gateway in each data centre. However, the first
>> paragraph of the following post suggests this is not possible:
>> 
>> http://article.gmane.org/gmane.comp.file-systems.ceph.devel/12238
>> 
>> Is there a hard restriction on how many radosgw instances can run
>> across the cluster, or is the point of the above post more about a
>> performance hit?
> 
> It's talking about the performance hit. Most people can't afford data-center-level connectivity between two different buildings. ;) If you did have a Ceph cluster split across two DCs (with the bandwidth to support them), this would work fine. There aren't any strict limits on the number of gateways you can stick on a cluster, just the scaling costs associated with cache-invalidation notifications.
> 
> 
>> It seems to me it should be possible to run more
>> than one radosgw, particularly if each instance communicates with a
>> local OSD which can proxy reads/writes to the primary (which may or
>> may not be DC-local).
> 
> They aren't going to do this, though — each gateway will communicate with the primaries directly.
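For what it's worth, the CRUSH side of what Ben describes is straightforward today. A rule along these lines (just a sketch; the root name "default", the ruleset number, and the datacenter buckets are whatever your own map defines) keeps one replica in each datacenter, so every write is acknowledged by both sites before it completes:

	rule cross_dc {
		ruleset 1
		type replicated
		min_size 2
		max_size 4
		# pick two datacenter buckets, then one host (and its OSD) inside each
		step take default
		step choose firstn 2 type datacenter
		step chooseleaf firstn 1 type host
		step emit
	}

Point the rgw data pools at that ruleset (e.g. ceph osd pool set .rgw.buckets crush_ruleset 1) and you get the synchronous cross-DC writes, along with the latency hit Ben already expects.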

I don't know what the timeline is, but Yehuda recently proposed the idea of master and slave "zones" (subsets of a cluster), along with other changes to facilitate "rgw geo-replication and disaster recovery". See this message:
	http://article.gmane.org/gmane.comp.file-systems.ceph.devel/12238

If/when that comes to fruition, it would open up a lot of possibilities for the kind of scenario you're describing. (Yes, I'm looking forward to it. :) )
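In the meantime, running a gateway in each datacenter against the single stretched cluster is just a matter of defining one radosgw instance per site in ceph.conf and fronting each with its own Apache/mod_fastcgi vhost. A rough sketch (hostnames, socket paths, and keyring locations here are placeholders, not anything canonical):

	[client.radosgw.dc1]
		host = gw-dc1
		keyring = /etc/ceph/keyring.radosgw.dc1
		rgw socket path = /var/run/ceph/radosgw.dc1.sock
		log file = /var/log/ceph/radosgw.dc1.log

	[client.radosgw.dc2]
		host = gw-dc2
		keyring = /etc/ceph/keyring.radosgw.dc2
		rgw socket path = /var/run/ceph/radosgw.dc2.sock
		log file = /var/log/ceph/radosgw.dc2.log

Both instances serve the same buckets and users; as Greg says, the cost is the cache-invalidation chatter between the gateways plus the WAN round-trips to whichever OSDs happen to be primary for a given object.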

JN
