Re: RBD mirroring, asking for clarification

Hi,

Thanks.
I am trying to find out the best way to synchronize VMs between two
HCI Proxmox clusters.
Each cluster will contain 3 compute/storage nodes, and each node will
contain 4 NVMe OSD disks.

There will be a 10 Gb/s link between the two platforms.

The idea is to be able to sync VMs between the two platforms and, in
case of disaster, bring the synced VMs up on the surviving platform.

Would you recommend creating a dedicated pool on each platform for
synchronization?
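For context, a dedicated mirror pool with per-image, snapshot-based
mirroring could be set up roughly as below. This is only a sketch based on
the standard rbd-mirror workflow; the pool name (vm-mirror), site names,
and image name (vm-100-disk-0) are placeholders, and the commands assume
an rbd-mirror daemon is running on each site:

```shell
# On site A: create a dedicated pool for mirrored VM images and enable
# per-image mirroring on it (the pool name must match on both sites).
ceph osd pool create vm-mirror
rbd pool init vm-mirror
rbd mirror pool enable vm-mirror image

# Create a bootstrap token on site A...
rbd mirror pool peer bootstrap create --site-name site-a vm-mirror > token
# ...copy the token file to site B, then on site B (after creating the
# same pool there) import it; rx-tx allows two-way replication:
rbd mirror pool peer bootstrap import --site-name site-b \
    --direction rx-tx vm-mirror token

# Enable mirroring for a specific image (snapshot-based):
rbd mirror image enable vm-mirror/vm-100-disk-0 snapshot

# In a disaster, promote the image on the surviving site so it becomes
# writable there:
rbd mirror image promote vm-mirror/vm-100-disk-0 --force
```

Non-mirrored images can still live in the same pool, but keeping mirrored
VM disks in their own pool makes client caps and cleanup simpler.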

Regards.

On Tue, May 2, 2023, 13:30 Eugen Block <eblock@xxxxxx> wrote:

> Hi,
>
> while your assumptions are correct (you can use the rest of the pool
> for other non-mirrored images), at least I'm not aware of any
> limitations, can I ask for the motivation behind this question? Mixing
> different use-cases doesn't seem like a good idea to me. There's
> always a chance that a client with caps for that pool deletes or
> modifies images or even the entire pool. Why not simply create a
> different pool and separate those clients?
>
> Thanks,
> Eugen
>
> Quoting wodel youchi <wodel.youchi@xxxxxxxxx>:
>
> > Hi,
> >
> > When using RBD mirroring, does the mirroring concern the images only,
> > not the whole pool? So we don't need a dedicated pool on the destination
> > site to be mirrored; the only requirement is that the mirrored pools
> > must have the same name.
> >
> > In other words, we create two pools with the same name, one on the
> > source site and the other on the destination site, we create the mirror
> > link (one-way or two-way replication), then we choose which images to
> > sync.
> >
> > Both pools can be used simultaneously on both sites; it's the mirrored
> > images that cannot be used simultaneously, only promoted ones.
> >
> > Is this correct?
> >
> > Regards.
> > _______________________________________________
> > ceph-users mailing list -- ceph-users@xxxxxxx
> > To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
>
>


