Re: ceph on two data centers far away

What is the use case that requires you to have it in two data centers?
In addition to the RBD mirroring already mentioned by others, you can take
RBD snapshots and ship them to a remote location (a separate cluster or a
separate pool). As with RBD mirroring, your client writes are not subject
to the inter-datacenter latency in that setup.
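
For illustration, here is a minimal sketch of that snapshot-shipping approach
(not production code; the cluster names, pool, image, and snapshot names are
made up, and it assumes both clusters' conf/keyrings are available locally as
/etc/ceph/local.conf and /etc/ceph/remote.conf). It just wraps the standard
rbd snap create / export-diff / import-diff CLI from Python:

#!/usr/bin/env python3
# Minimal sketch: incrementally ship RBD snapshots from a local cluster to a
# remote one. Pool, image, cluster and snapshot names below are hypothetical.
import subprocess

POOL, IMAGE = "rbd", "vol1"          # hypothetical pool/image
LOCAL, REMOTE = "local", "remote"    # hypothetical --cluster names

def run(cmd, **kwargs):
    print("+", " ".join(cmd))
    return subprocess.run(cmd, check=True, **kwargs)

def ship_snapshot(prev_snap, new_snap):
    spec = f"{POOL}/{IMAGE}"
    # 1. Take a new snapshot on the source cluster.
    run(["rbd", "--cluster", LOCAL, "snap", "create", f"{spec}@{new_snap}"])
    # 2. Export only the delta since the previous (already shipped) snapshot,
    #    streaming it to stdout ...
    export = subprocess.Popen(
        ["rbd", "--cluster", LOCAL, "export-diff",
         "--from-snap", prev_snap, f"{spec}@{new_snap}", "-"],
        stdout=subprocess.PIPE)
    # 3. ... and apply the diff to the remote copy of the image.
    run(["rbd", "--cluster", REMOTE, "import-diff", "-", spec],
        stdin=export.stdout)
    export.stdout.close()
    if export.wait() != 0:
        raise RuntimeError("rbd export-diff failed")

if __name__ == "__main__":
    # Assumes snap-2016-10-19 already exists on both sides as a common base.
    ship_snapshot("snap-2016-10-19", "snap-2016-10-20")

Run something like that on a schedule and you get asynchronous, point-in-time
replication; clients only ever write to the local cluster.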

On Thu, Oct 20, 2016 at 1:51 PM, German Anders <ganders@xxxxxxxxxxxx> wrote:
> Thanks, that's too far actually, lol. And how are things going with RBD
> mirroring?
>
> German
>
> 2016-10-20 14:49 GMT-03:00 yan cui <ccuiyyan@xxxxxxxxx>:
>>
>> The two data centers are actually across the US.  One is in the west, and the
>> other in the east.
>> We are trying to sync RBD images using RBD mirroring.
>>
>> 2016-10-20 9:54 GMT-07:00 German Anders <ganders@xxxxxxxxxxxx>:
>>>
>>> Out of curiosity, I wanted to ask what kind of network topology you are
>>> trying to use across the cluster? In this type of scenario you really need
>>> an ultra-low-latency network. How far apart are the data centers?
>>>
>>> Best,
>>>
>>> German
>>>
>>> 2016-10-18 16:22 GMT-03:00 Sean Redmond <sean.redmond1@xxxxxxxxx>:
>>>>
>>>> Maybe this would be an option for you:
>>>>
>>>> http://docs.ceph.com/docs/jewel/rbd/rbd-mirroring/
>>>>
>>>>
>>>> On Tue, Oct 18, 2016 at 8:18 PM, yan cui <ccuiyyan@xxxxxxxxx> wrote:
>>>>>
>>>>> Hi Guys,
>>>>>
>>>>>    Our company has a use case that needs Ceph to span two data centers
>>>>> (one data center is far away from the other). Our experience with a single
>>>>> data center has been good. We did some benchmarking across two data centers,
>>>>> and the performance was poor because of Ceph's synchronous replication and
>>>>> the large latency between the data centers. So, are there data-center-aware
>>>>> features in Ceph that would give us good locality? We usually use RBD to
>>>>> create volumes and snapshots, but we want the volumes to be highly available
>>>>> with acceptable performance in case one data center goes down. Our current
>>>>> setup does not take the data center layout into account. Any ideas?
>>>>>
>>>>>
>>>>> Thanks, Yan
>>>>>
>>>>> --
>>>>> Think big; Dream impossible; Make it happen.
>>>>>
>>>>
>>>>
>>>
>>
>>
>>
>> --
>> Think big; Dream impossible; Make it happen.
>
>
>
>



-- 
Respectfully,

Wes Dillingham
wes_dillingham@xxxxxxxxxxx
Research Computing | Infrastructure Engineer
Harvard University | 38 Oxford Street, Cambridge, Ma 02138 | Room 210
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


