Re: Geo-replication with RBD

Hi,

First of all, I have some questions about your setup:

* What are your requirements?
* Are the DCs far from each other?

If they are reasonably close to each other, you can set up a single
cluster with replicas across both DCs and manage the RBD devices with
Pacemaker.
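
As a rough sketch of the CRUSH side (assuming your map defines a
"datacenter" bucket type with both DCs under the default root; the rule
name, ruleset id and sizes below are placeholders to adapt):

    # excerpt from a decompiled CRUSH map -- illustrative only
    rule replicated_two_dcs {
            ruleset 1
            type replicated
            min_size 2
            max_size 4
            step take default
            # pick both datacenter buckets, then two hosts in each
            step choose firstn 2 type datacenter
            step chooseleaf firstn 2 type host
            step emit
    }

With the pool set to size 4 and pointed at that ruleset (something like
"ceph osd pool set rbd size 4" and "ceph osd pool set rbd crush_ruleset 1"),
each object keeps two copies per DC, and Pacemaker only has to decide
where the images get mapped.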

Cheers.

--
Regards,
Sébastien Han.


On Mon, Feb 18, 2013 at 3:20 PM, Sławomir Skowron <szibis@xxxxxxxxx> wrote:
> Hi, sorry for the very late response, but I was sick.
>
> Our use case is having a failover RBD instance in another cluster. We are
> storing block device images for services such as databases. We need
> two synchronized clusters for a quick failover if the first cluster
> goes down, for upgrades that require a restart, and for many other cases.
>
> Volumes come in many sizes (1-500 GB) and are used as external block
> devices for KVM VMs, like EBS.
>
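
If you do go with two separate clusters, one common stop-gap for this kind
of failover is periodic, snapshot-based shipping of the images with
"rbd export-diff" / "rbd import-diff". A minimal sketch, assuming an rbd
version that has those commands, that the image already exists on the
backup cluster with a shared base snapshot, and with placeholder names
(pool "rbd", image "database01", host "backup-site"):

    #!/bin/sh
    POOL=rbd
    IMG=database01
    PREV=backup-prev        # snapshot already present on both clusters
    CUR=backup-$(date +%Y%m%d)

    # freeze a new point-in-time view on the primary cluster
    rbd snap create ${POOL}/${IMG}@${CUR}

    # send only the blocks changed since the previous snapshot
    rbd export-diff --from-snap ${PREV} ${POOL}/${IMG}@${CUR} - | \
        ssh backup-site "rbd import-diff - ${POOL}/${IMG}"

This stays asynchronous (anything written after the last snapshot is lost
on failover), but bringing the standby copy up is then just mapping the
image on the other cluster.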
>> On Fri, Feb 1, 2013 at 12:27 AM, Neil Levine <neil.levine@xxxxxxxxxxx>
>> wrote:
>>>
>>> Skowron,
>>>
>>> Can you go into a bit more detail on your specific use case? What type
>>> of data are you storing in RBD (type, volume)?
>>>
>>> Neil
>>>
>>> On Wed, Jan 30, 2013 at 10:42 PM, Skowron Sławomir
>>> <slawomir.skowron@xxxxxxxxxxxx> wrote:
>>> > I am starting a new thread, because I think it's a different case.
>>> >
>>> > We have managed asynchronous geo-replication of the S3 service between
>>> > two Ceph clusters in two DCs, and to Amazon S3 as a third site, all via
>>> > the S3 API. I would love to see native RGW geo-replication with the
>>> > features described in the other thread.
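
(As an aside, for anyone wanting to reproduce that kind of S3-level copy
without RGW-native replication: a crude but workable pattern is to stage
through a local directory with one s3cmd configuration per cluster. The
config file and bucket names below are placeholders.)

    # mirror one bucket from cluster A to cluster B via a staging directory
    s3cmd -c ~/.s3cfg-clusterA sync s3://my-bucket/ /var/tmp/staging/
    s3cmd -c ~/.s3cfg-clusterB sync /var/tmp/staging/ s3://my-bucket/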
>>> >
>>> > There is another case: what about RBD replication? It's much more
>>> > complicated, and much more important for disaster recovery, just like
>>> > in enterprise storage arrays.
>>> > One cluster spanning two DCs does not solve the problem, because we
>>> > need guarantees about data consistency, and isolation.
>>> > Are you thinking about this case?
>>> >
>>> > Regards,
>>> > Slawomir Skowron
>
>
>
> --
> -----
> Regards,
>
> Sławek "sZiBis" Skowron

