RE: distributed cluster

This is something I'm very interested in as well, from the perspective of a power outage or some other data centre issue.

I assume the main issue here would be our friend latency; however, there is a bloke on the mailing list who is currently running a two-site cluster setup as well.

I've been thinking about a setup with a replica level of 2 (1 replica per site). With the sites only 2-3 km apart, latency shouldn't be much of an issue, but the obvious bottleneck will be the 10GbE link between sites. Split-brain isn't an issue if the RBD volume is only mounted at a single site anyway.
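
Roughly the placement I have in mind, as a CRUSH rule sketch (untested; it assumes a custom "site" bucket type with one bucket per site already defined in the CRUSH map, and the rule name and ruleset id are just placeholders):

    rule two_site_replication {
            ruleset 1
            type replicated
            min_size 2
            max_size 2
            # pick as many sites as the pool has replicas (2 here),
            # then one OSD under a distinct host at each site
            step take default
            step choose firstn 0 type site
            step chooseleaf firstn 1 type host
            step emit
    }

That should keep one copy at each site regardless of how many hosts end up at either end.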

If the data is sitting on a BTRFS/ZFS RAID (or RAID6 until BTRFS is ready), this would be a reasonable level of risk. As for the data integrity/availability of only having 2 replicas, the likelihood of having a complete server failure and a link outage at the same time would be fairly minimal.
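
For the RBD side, something along these lines is what I'm picturing (untested sketch; the pool name, pg count and image name are made up, and the crush_ruleset id refers to the rule sketch above):

    # 2-replica pool mapped to the site-aware rule above
    ceph osd pool create rbd-metro 256
    ceph osd pool set rbd-metro size 2
    ceph osd pool set rbd-metro crush_ruleset 1

    # a single RBD volume, only ever mapped/mounted at one site at a time
    rbd create vol0 --size 102400 --pool rbd-metro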

Regards,
Quenten 


-----Original Message-----
From: ceph-devel-owner@xxxxxxxxxxxxxxx [mailto:ceph-devel-owner@xxxxxxxxxxxxxxx] On Behalf Of Jimmy Tang
Sent: Monday, 28 May 2012 11:48 PM
To: Jerker Nyberg
Cc: ceph-devel@xxxxxxxxxxxxxxx
Subject: Re: distributed cluster

Hi All,

On 28 May 2012, at 12:28, Jerker Nyberg wrote:

> 
> This may not really be a subject for the ceph-devel mailing list but rather for a potential ceph-users? I hope it is OK to write here. I would like to discuss whether it sounds reasonable to run a Ceph cluster distributed over a metro (city) network.
> 
> Let us assume we have a couple of sites distributed over a metro network with at least gigabit interconnect. The demands for storage capacity and speed at our sites are increasing, together with the demands for reasonably stable storage. Might Ceph be part of a solution?
> 
> One idea is to set up Ceph distributed over this metro network. A public service network is announced at all sites, anycasted from the storage SMB/NFS/RGW(?)-to-Ceph gateways (for stateless connections). Stateful connections (iSCSI?) have to contact the individual storage gateways, and redundancy is handled at the application level (dual path). Ceph kernel clients contact the storage servers directly.
> 
> Hopefully this means that clients at a site with a storage gateway will contact the local gateway. Clients at a site without a local storage gateway, or whose local gateway is down, will contact a storage gateway at another site.
> 
> Hopefully not all power and network at the whole city will go down at once!
> 
> Does this sound reasonable? With Ceph it should be easy to scale up with more storage nodes. Or is it better to put all servers in the same server room?
> 
>                        Internet
>                         |   |
>                        Routers
>                         |   |
>   Metro network  =============================
>                  |     |     |     |    |    |
>   Sites          R     R     R     R    R    R
>                  |     |     |     |
>   Servers      Ceph1 Ceph2 Ceph3 Ceph4
> 
> 


I'm also interested in this type of use case; I would like to run a Ceph cluster across a metropolitan area network. Has anyone tried running Ceph in a WAN/MAN environment across a city/state/country?

Regards,
Jimmy Tang

--
Senior Software Engineer, Digital Repository of Ireland (DRI)
Trinity Centre for High Performance Computing,
Lloyd Building, Trinity College Dublin, Dublin 2, Ireland.
http://www.tchpc.tcd.ie/

--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

