Re: RBD Mirroring with Journaling and Snapshot mechanism

On Tue, May 7, 2024 at 7:54 AM Eugen Block <eblock@xxxxxx> wrote:
>
> Hi,
>
> I'm not the biggest rbd-mirror expert.
> As I understand it, with one-way mirroring you can fail over to the
> remote site and continue to work there, but there is no automatic
> failback to the primary site. You would need to stop client I/O on
> DR, demote the image and then import the remote images back to the
> primary site. Once everything is good, you can promote the image on
> the primary again. The rbd-mirror will then most likely be in a
> split-brain situation, which can be resolved by resyncing the images
> from the primary again. You can't issue a resync on the primary site
> because there's no rbd-mirror daemon running there.
>
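
For reference, the one-way failback sequence Eugen describes would look
roughly like this (a sketch; mypool/vm-disk-1 is a placeholder image
name, and I'm assuming the primary cluster's config is reachable via
--cluster primary):

    # on DR (the current primary): stop client I/O, then demote
    rbd mirror image demote mypool/vm-disk-1

    # copy the image data back to the primary cluster,
    # e.g. with a full export/import over a pipe
    rbd export mypool/vm-disk-1 - | rbd --cluster primary import - mypool/vm-disk-1

    # on the primary site, once the data is there
    rbd mirror image promote mypool/vm-disk-1
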
> Having two-way mirroring could help, I believe. Let's say you lose
> the primary site: you can (force-)promote the images on the remote
> site and continue working. Once the primary site is back up (but not
> primary yet), you can resync the images from the remote (currently
> primary) site, because there's an rbd-mirror daemon running on the
> primary site as well. Once the primary site has all images promoted
> again, you'll probably have to resync on the remote site once more to
> get out of the split-brain.
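
In command form, the failover step Eugen mentions would be something
like this (a sketch; the image name is again a placeholder):

    # on the DR site, while the primary is down
    rbd mirror image promote --force mypool/vm-disk-1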

Also, you need to demote the out-of-date images in the cluster that
came back before issuing a resync on them; this is what resolves the
split-brain. See
https://docs.ceph.com/en/latest/rbd/rbd-mirroring/#force-image-resync
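
On the cluster that came back, that would be roughly (again a sketch
with a placeholder image name):

    # on the returned, now out-of-date cluster
    rbd mirror image demote mypool/vm-disk-1
    rbd mirror image resync mypool/vm-disk-1

    # watch the status until the image reports up+replaying again
    rbd mirror image status mypool/vm-disk-1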

-Ramana

> But at least you won't need to export/import images.
>
> But you'll need to test this properly to find out if your requirements
> are met.
>
> Regards,
> Eugen
>
>
> Quoting V A Prabha <prabhav@xxxxxxx>:
>
> > Dear Eugen,
> > We have a DC-to-DR replication scenario and planned to explore RBD
> > mirroring with both the journaling and the snapshot mechanism.
> > I have 5 TB of storage at the primary DC and 5 TB of storage at the
> > DR site, configured as two different Ceph clusters.
> >
> > Please clarify the following queries
> >
> > 1. With one-way mirroring, failover works fine with both the
> > journaling and the snapshot mechanism, and we are able to promote
> > the workload at the DR site. How does failback work? We wanted to
> > move the contents from DR back to DC, but it fails. With the
> > journaling mechanism, it deletes the entire volume and recreates it
> > afresh, which does not solve our problem.
> > 2. How does incremental replication work from DR to DC?
> > 3. Does two-way mirroring help in this situation? As I understand
> > it, this method is meant for two different clouds with two
> > different storage backends, replicating both clouds' workloads.
> > Does failback work in this scenario?
> > Please help/guide us to deploy this solution.
> >
> > Regards
> > V.A.Prabha
> >
> >
> > Thanks & Regards,
> > Ms V A Prabha
> > Joint Director
> > Centre for Development of Advanced Computing (C-DAC)
> > "Tidel Park", 8th Floor, "D" Block (North & South)
> > No. 4, Rajiv Gandhi Salai
> > Taramani
> > Chennai – 600113
> > Ph. No.: 044-22542226/27
> > Fax No.: 044-22542294
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx