Re: RBD Mirror Proxy Support?

On Mon, Jan 14, 2019 at 11:09 AM Kenneth Van Alstyne
<kvanalstyne@xxxxxxxxxxxxxxx> wrote:
>
> In this case, I’m imagining clusters A and B both having write access to a third “Cluster C”.  So A/B -> C rather than A -> C -> B / B -> C -> A / A -> B -> C.  I admit that if I ever need to replicate back to either primary cluster, there may be challenges.

While this is possible, in addition to the failback question, you
would also need to use unique pool names in clusters A and B, since
cluster C is currently prevented from adding more than a single peer
per pool.

> Thanks,
>
> --
> Kenneth Van Alstyne
> Systems Architect
> Knight Point Systems, LLC
> Service-Disabled Veteran-Owned Business
> 1775 Wiehle Avenue Suite 101 | Reston, VA 20190
> c: 228-547-8045 f: 571-266-3106
> www.knightpoint.com
> DHS EAGLE II Prime Contractor: FC1 SDVOSB Track
> GSA Schedule 70 SDVOSB: GS-35F-0646S
> GSA MOBIS Schedule: GS-10F-0404Y
> ISO 9001 / ISO 20000 / ISO 27001 / CMMI Level 3
>
>
> On Jan 14, 2019, at 9:50 AM, Jason Dillaman <jdillama@xxxxxxxxxx> wrote:
>
> On Mon, Jan 14, 2019 at 10:10 AM Kenneth Van Alstyne
> <kvanalstyne@xxxxxxxxxxxxxxx> wrote:
>
>
> Thanks for the reply, Jason — I was actually thinking of emailing you directly, but thought it may be beneficial to keep the conversation on the list so that everyone can see the thread.  Can you think of a reason why one-way RBD mirroring would not work to a shared tertiary cluster?  I need to build out a test lab to see how that would work for us.
>
>
> I guess I don't understand what the tertiary cluster would be doing.
> If the goal is to replicate from cluster A -> cluster B -> cluster C,
> that is not currently supported: by design choice, we don't re-write
> the RBD image journal entries from the source cluster to the
> destination cluster, but instead directly apply the journal entries
> to the destination image (to save IOPS).
>
> Thanks,
>
>
> On Jan 12, 2019, at 4:01 PM, Jason Dillaman <jdillama@xxxxxxxxxx> wrote:
>
> On Fri, Jan 11, 2019 at 2:09 PM Kenneth Van Alstyne
> <kvanalstyne@xxxxxxxxxxxxxxx> wrote:
>
>
> Hello all (and maybe this would be better suited for the ceph devel mailing list):
> I’d like to set up two-way RBD mirroring between two sites, but I have the following limitations:
> - The clusters use the same name (“ceph”)
>
>
> That's actually not an issue. The "ceph" name is used to locate
> configuration files for RBD mirroring (a la
> /etc/ceph/<cluster-name>.conf and
> /etc/ceph/<cluster-name>.client.<id>.keyring). You just need to map
> that cluster config file name to the remote cluster name in the RBD
> mirroring configuration. Additionally, starting with Nautilus, the
> configuration details for connecting to a remote cluster can now be
> stored in the monitor (via the rbd CLI and dashboard), so there won't
> be any need to fiddle with configuration files for remote clusters
> anymore.
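>
> As a concrete sketch (names here are hypothetical): on the host
> running rbd-mirror you'd keep the local cluster's files as
> /etc/ceph/ceph.conf and /etc/ceph/ceph.client.mirror.keyring, drop
> copies of the other cluster's config and keyring in as, say,
> /etc/ceph/remote.conf and /etc/ceph/remote.client.mirror.keyring,
> and then register the peer under that name:
>
>   rbd mirror pool peer add mypool client.mirror@remote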
>
> - The clusters share IP address space on a private, non-routed storage network
>
>
> Unfortunately, that is an issue since the rbd-mirror daemon needs to
> be able to connect to both clusters. If the two clusters are at least
> on different subnets and your management servers can talk to each
> side, you might be able to run the rbd-mirror daemon there.
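>
> A rough sketch of what running it there might look like (the
> "mirror" client id is hypothetical; the systemd unit name assumes a
> standard package install):
>
>   systemctl enable --now ceph-rbd-mirror@mirror
>   # or, to test in the foreground:
>   rbd-mirror --cluster ceph --id mirror -f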
>
>
> There are management servers on each side that can talk to the respective storage networks, but the storage networks cannot talk directly to each other.  I recall reading, some years back, about possible support for an RBD mirror proxy, which would potentially solve my issues.  Has anything been done in this regard?
>
>
> No, I haven't really seen much demand for such support, so it hasn't
> bubbled up as a priority yet.
>
> If not, is my best bet perhaps a tertiary cluster that both can reach and do one-way replication to?
>
> Thanks,
>
>
>
> --
> Jason


-- 
Jason
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



