Re: Mirroring data between pools on the same cluster

On Thu, Mar 16, 2017 at 3:24 PM, Adam Carheden <carheden@xxxxxxxx> wrote:
> On Thu, Mar 16, 2017 at 11:55 AM, Jason Dillaman <jdillama@xxxxxxxxxx> wrote:
>> On Thu, Mar 16, 2017 at 1:02 PM, Adam Carheden <carheden@xxxxxxxx> wrote:
>>> Ceph can mirror data between clusters
>>> (http://docs.ceph.com/docs/master/rbd/rbd-mirroring/), but can it
>>> mirror data between pools in the same cluster?
>>
>> Unfortunately, that's a negative. The rbd-mirror daemon currently
>> assumes that the local and remote pool names are the same. Therefore,
>> you cannot mirror images between a pool named "X" and a pool named
>> "Y".
> I figured as much from the command syntax. Am I going about this all
> wrong? There have got to be lots of orgs with two rooms that back each
> other up. How do others solve that problem?

Not sure. This is definitely the first time I've heard this as an
example for RBD mirroring. However, it's a relatively new feature, and
we expect that the more people use it, the more interesting scenarios
we will learn about.
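
For reference, the supported setup today is two clusters, each with a
pool of the same name, mirroring between them. A rough sketch, assuming
hypothetical cluster names "roomA" and "roomB", a pool named "rbd", and
an image named "myimage":

  # on each cluster: enable per-image mirroring on the pool
  rbd --cluster roomA mirror pool enable rbd image
  rbd --cluster roomB mirror pool enable rbd image

  # register each cluster as the other's peer
  rbd --cluster roomA mirror pool peer add rbd client.admin@roomB
  rbd --cluster roomB mirror pool peer add rbd client.admin@roomA

  # per image: journaling (requires exclusive-lock), then mirroring
  rbd --cluster roomA feature enable rbd/myimage exclusive-lock journaling
  rbd --cluster roomA mirror image enable rbd/myimage

  # run rbd-mirror on the cluster that should pull the changes
  # (unit instance is the cephx client id the daemon runs as)
  systemctl enable --now ceph-rbd-mirror@admin

None of those names are special -- they're just placeholders for
whatever your clusters, pool, and images are actually called.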

> How about a single 10Gb fiber link (which is, unfortunately, used for
> everything, not just CEPH)? Any advice on estimating if/when latency
> over a single link will become a problem?

A quick end-to-end performance test would probably answer that
question quickly from a TCP/IP perspective. Ceph IO latency will be a
combination of the network latency (client to primary PG, and primary
PG to secondary PGs for replication), disk IO latency, and Ceph
software latency.
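
Something along these lines would give you a rough baseline (the
hostnames and pool name below are just placeholders):

  # raw link latency and throughput between the two rooms
  ping -c 50 osd-node.room2.example.org
  iperf3 -s                                   # on a node in room 2
  iperf3 -c osd-node.room2.example.org -t 30  # from a node in room 1

  # Ceph-level write latency on a pool whose PGs span both rooms
  rados bench -p testpool 30 write -b 4096 -t 16
  ceph osd perf                               # per-OSD commit/apply latency

If the small-write latencies reported by rados bench track the ping RTT
rather than the disk or software numbers, the inter-room link is the
bottleneck.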

>> At the current time, I think three separate clusters would be the only
>> thing that could satisfy all use-case requirements. While I have never
>> attempted this, I would think that you should be able to run two
>> clusters on the same node (e.g. the HA cluster gets one OSD per node
>> in both rooms and the roomX cluster gets the remainder of OSDs in each
>> node in its respective room).
>
> Great idea. I guess that could be done either by munging some port
> numbers and using non-default config file locations, or by running Ceph
> OSDs and monitors on VMs. Any compelling reason for one way over the other?

Containerized Ceph is another alternative and is gaining interest. If
you use VMs, you will take a slight performance hit from the
virtualization, but the existing deployment tools will work without
modification. Yet another option is to use the existing deployment
tools to deploy the two "room" clusters and then manually create the
few extra OSDs and MONs for the HA cluster.
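
If you go the bare-metal route, the old multi-cluster support is
essentially a second config file plus the --cluster flag everywhere:
daemons for an "ha" cluster read /etc/ceph/ha.conf instead of
/etc/ceph/ceph.conf. A very rough sketch -- the cluster name, fsid,
addresses, and mon port below are made up:

  # /etc/ceph/ha.conf -- second cluster alongside the default "ceph" one
  [global]
  fsid = <uuid generated for the HA cluster>
  mon initial members = mon1, mon2, mon3
  mon host = 10.0.1.1:6790, 10.0.2.1:6790, 10.0.3.1:6790  # non-default port
  public network = 10.0.0.0/16

  # run daemons against that config by passing the cluster name
  ceph-mon --cluster ha -i mon1
  ceph-osd --cluster ha -i 12
  ceph --cluster ha -s

The main gotchas are keeping the monitor ports and data directories
(/var/lib/ceph/mon/ha-*, /var/lib/ceph/osd/ha-*) from colliding with
the default cluster's, and remembering that most deployment tooling
assumes one cluster per host.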

> --
> Adam Carheden
> Systems Administrator
> UCAR/NCAR/RAL
> x2753

-- 
Jason
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


