Re: RBD Mirroring with Journaling and Snapshot mechanism

On Thu, May 2, 2024 at 2:56 AM V A Prabha <prabhav@xxxxxxx> wrote:
>
> Dear Eugen,
> We have a scenario of DC and DR replication, and we planned to explore RBD
> mirroring with both the journaling and snapshot mechanisms.
> I have 5 TB of storage at the primary DC and 5 TB of storage at the DR site,
> with 2 different Ceph clusters configured.
>
> Please clarify the following queries
>
> 1. With one-way mirroring, failover works fine with both the journaling and
> snapshot mechanisms, and we are able to promote the workload from the DR site.
> How does failback work? We wanted to move the contents from DR back to DC, but it fails.

You'd need an RBD mirror daemon running in the DC cluster to replicate
the changes from DR to DC, as Eugen said earlier. I suggest setting up
two-way mirroring with an RBD mirror daemon in each cluster for easy
failover/failback. The RBD mirror daemon in the cluster that's not
currently replicating changes can just be left running; it won't do any
active mirroring work.
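
If it helps, here's a minimal sketch of the two-way setup. The pool name
(mypool) and site names (dc, dr) are placeholders, and the first command
assumes a cephadm-managed cluster:

    # Run an rbd-mirror daemon in each cluster (cephadm deployment):
    ceph orch apply rbd-mirror --placement=1

    # Enable mirroring on the pool (image mode shown; pool mode also works):
    rbd mirror pool enable mypool image

    # On the DC cluster, create a bootstrap token:
    rbd mirror pool peer bootstrap create --site-name dc mypool > token

    # On the DR cluster, import the token with rx-tx so replication can
    # flow in either direction:
    rbd mirror pool peer bootstrap import --site-name dr --direction rx-tx mypool token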

> With the journaling mechanism, it deletes the entire volume and recreates it
> afresh, which does not solve our problem.

Not sure about this. What commands did you run here?
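
For reference, the usual failback sequence looks roughly like the
following (mypool/myimage is a placeholder, and this assumes the DC
mirror daemon from above is running):

    # On the DC cluster: demote the stale old primary and request a
    # resync from the current DR primary:
    rbd mirror image demote mypool/myimage
    rbd mirror image resync mypool/myimage

    # Once the image is back in sync:
    rbd mirror image demote mypool/myimage     # run on the DR cluster
    rbd mirror image promote mypool/myimage    # run on the DC cluster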

> 2. How does incremental replication work from DR to DC?

The RBD mirror daemon in the DC cluster would use the same incremental
replication mechanism that the mirror daemon in the DR cluster used to
replicate images before the failover.
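
You can watch the replication catch up with the status commands (pool
and image names are placeholders again):

    rbd mirror pool status mypool --verbose
    rbd mirror image status mypool/myimage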

> 3. Does two-way mirroring help in this situation? As I understand it, this
> method is for 2 different clouds with 2 different storage backends, replicating
> both clouds' workloads. Does failback work in this scenario?
> Please help us / guide us to deploy this solution.

Yes, two-way mirroring gives you easy failover/failback. Also keep in
mind that journal-based mirroring involves writing both to the primary
image's journal and to the image itself. Snapshot-based mirroring is
being actively enhanced and doesn't have the 2X writes in the primary
cluster. You'd have to find a mirroring snapshot schedule that works
for your setup.
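
For example (the pool name and the 15-minute interval are placeholders):

    # Take a mirror snapshot of every mirrored image in the pool every 15 minutes:
    rbd mirror snapshot schedule add --pool mypool 15m

    # Inspect the schedules and when the next snapshots are due:
    rbd mirror snapshot schedule ls --pool mypool --recursive
    rbd mirror snapshot schedule status --pool mypool
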
Support in snapshot-based mirroring for propagating discards to the
secondary [1] and for replicating clones [2] is being worked on.

Hope this helps.

Best,
Ramana

[1] https://tracker.ceph.com/issues/58852
[2] https://tracker.ceph.com/issues/61891


>
> Regards
> V.A.Prabha
>
>
> Thanks & Regards,
> Ms V A Prabha
> Joint Director
> Centre for Development of Advanced Computing (C-DAC)
> "Tidel Park", 8th Floor, "D" Block (North & South)
> No.4, Rajiv Gandhi Salai
> Taramani
> Chennai – 600113
> Ph. No.: 044-22542226/27
> Fax No.: 044-22542294
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx