Re: RBD mirroring

Yes, disaster recovery can be solved at the application layer, but I think it would be a nice OpenStack feature too, especially since the replication itself is already solved by Ceph. I'll ask on other forums whether anyone is working on that feature. Thanks again for pointing me in the right direction.
Kemo

On Fri, Jan 6, 2017 at 2:59 PM, Jason Dillaman <jdillama@xxxxxxxxxx> wrote:
In all honesty, this unfortunately isn't one of my areas of expertise.

OpenStack has such a large umbrella and is moving too quickly for me
to stay 100% up-to-date. From a cloud point of view, I don't think
this is a problem too many purists are concerned about, since a
cloud-native app should be able to survive failures, and any
necessary replication of data should be handled by the
application layer instead of the infrastructure. It's also
definitely a hard problem to solve in a generic fashion.

On Fri, Jan 6, 2017 at 8:27 AM, Klemen Pogacnik <klemen@xxxxxxxxxx> wrote:
> That's what I was afraid of. So there aren't any commands available, and I
> must somehow synchronize the Cinder DB to get access to the volumes on the
> second site as well. Do you know whether somebody is already thinking about,
> or even working on, that? The Kingbird project was mentioned in the
> presentation, but I'm not sure their work will solve this problem.
> Kemo
>
> On Thu, Jan 5, 2017 at 4:45 PM, Jason Dillaman <jdillama@xxxxxxxxxx> wrote:
>>
>> On Thu, Jan 5, 2017 at 7:24 AM, Klemen Pogacnik <klemen@xxxxxxxxxx> wrote:
>> > I'm playing with RBD mirroring with OpenStack. The final idea is to use it
>> > for disaster recovery of a DB server running on an OpenStack cluster, but
>> > I would like to test this functionality first.
>> > I've prepared this configuration:
>> > - 2 openstack clusters (devstacks)
>> > - 2 ceph clusters (one node clusters)
>> > Remote Ceph is used as the backend for the Cinder service; each devstack
>> > has its own Ceph cluster. Mirroring was enabled for the volumes pool, and
>> > the rbd-mirror daemon was started.
>> > When I create a new Cinder volume on devstack1, the same RBD image appears
>> > on both Ceph clusters, so it seems mirroring is working.
>> > Now I would like to see this storage as a Cinder volume on devstack2 too.
>> > Is it somehow possible to do that?
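For reference, a pool-mode mirroring setup like the one described is typically configured roughly as follows. This is a sketch: the pool name "volumes" matches the thread, but the cluster names (site-a/site-b) and CephX user names are placeholders, and both clusters' config/keyring files must be present on the node running the commands.

```shell
# On both clusters: enable mirroring on the "volumes" pool.
# "pool" mode mirrors every image that has the journaling feature enabled.
rbd --cluster site-a mirror pool enable volumes pool
rbd --cluster site-b mirror pool enable volumes pool

# Register each cluster as a peer of the other
# (client and cluster names here are placeholders).
rbd --cluster site-a mirror pool peer add volumes client.site-b@site-b
rbd --cluster site-b mirror pool peer add volumes client.site-a@site-a

# Check replication health and per-image state on the secondary.
rbd --cluster site-b mirror pool status volumes --verbose
```

The rbd-mirror daemon then runs against the cluster that receives the replicated images and pulls journal updates from its peer.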
>>
>> This level of HA/DR is not currently built into OpenStack (and it's
>> outside the scope of Ceph). There are several strategies you could use
>> to replicate the devstack1 database to devstack2. Here is a
>> presentation from the OpenStack Summit in Austin [1] on this subject.
>>
>> > The next question is how to make a switchover. On Ceph it can easily be
>> > done with the demote and promote commands, but the volumes are still not
>> > visible on devstack2, so I can't use them.
>> > On OpenStack there is the cinder failover-host command, which is, as I
>> > understand it, only useful for a configuration with one OpenStack and two
>> > Ceph clusters. Any idea how to make a switchover with my configuration?
>> > Thanks a lot for help!
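At the Ceph level, the demote/promote step mentioned above looks roughly like this. The cluster names are placeholders, and volume-<uuid> stands in for the actual RBD image name that Cinder generated; this only flips which Ceph copy is writable and does nothing for Cinder's view of the volume.

```shell
# On the current primary (site-a): demote the image so it stops taking writes.
rbd --cluster site-a mirror image demote volumes/volume-<uuid>

# On the secondary (site-b): promote its copy to primary.
rbd --cluster site-b mirror image promote volumes/volume-<uuid>

# If site-a is lost entirely, force-promote on site-b instead
# (the old primary must later be resynced, e.g. with
# "rbd mirror image resync", before mirroring can resume).
rbd --cluster site-b mirror image promote --force volumes/volume-<uuid>
```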
>>
>> Correct -- Cinder's built-in volume replication feature is just a set
>> of hooks available for backends that already support
>> replication/mirroring. The hooks for Ceph RBD are scheduled to be
>> included in the next release of OpenStack, but as you have discovered,
>> it really only protects against a storage failure (where you can
>> switch from Ceph cluster A to Ceph cluster B), but does not help with
>> losing your OpenStack data center.
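For the single-OpenStack/two-Ceph-clusters case those hooks cover, the Cinder RBD backend is configured with a replication_device entry, roughly as below. This is a sketch: the section name, backend name, and file paths are placeholders for your own deployment.

```ini
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
# Secondary cluster used when the primary fails:
replication_device = backend_id:secondary, conf:/etc/ceph/secondary.conf, user:cinder
```

Failing over to the secondary cluster is then triggered with something like `cinder failover-host <host>@ceph --backend_id secondary`, which is exactly the storage-failure case described above, not a full data-center failover.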
>>
>> > Kemo
>> >
>>
>> [1]
>> https://www.openstack.org/videos/video/protecting-the-galaxy-multi-region-disaster-recovery-with-openstack-and-ceph
>>
>> --
>> Jason
>
>



--
Jason

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
