Re: RBD mirroring, asking for clarification

Hi,

The question is whether both sites are used as primary clusters by different clients, or whether this is for disaster recovery only (site1 fails, site2 is promoted to primary). If both clusters are used independently with different clients, I would prefer to separate the pools, i.e. this option:

PoolA (site1)  -----> PoolA (site2)
PoolB (site1) <-----  PoolB (site2)

That means for images in poolA, site1 is the primary site and site2 is the backup site; for images in poolB, site2 is the primary site.
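
For the layout above, the commands could look roughly like this (image mode is an assumption; the pool and site names just follow the diagram, and the peer bootstrap is the standard rbd-mirror workflow):

    # On both sites: enable mirroring on poolA (site1 is primary for it)
    rbd mirror pool enable poolA image

    # On site1: create a bootstrap token, then import it on site2
    rbd mirror pool peer bootstrap create --site-name site1 poolA > token_poolA
    # On site2:
    rbd mirror pool peer bootstrap import --site-name site2 poolA token_poolA

    # Repeat for poolB with the roles reversed (site2 primary, site1 backup)

An rbd-mirror daemon has to run on each cluster that receives images.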

Quoting wodel youchi <wodel.youchi@xxxxxxxxx>:

Hi,

The goal is to sync some VMs from site1 to site2, and other VMs in the
opposite direction.
I am thinking of using rbd mirroring for that, but I have little
experience with Ceph management.

I am searching for the best way to do that.

I could create two pools on each site, and cross sync the pools.
PoolA (site1)  -----> PoolA (site2)
PoolB (site1) <-----  PoolB (site2)

Or create one pool on each site and cross sync the VMs I need.
PoolA (site1) <-----> PoolA (site2)


The first option seems to be the safest and the easiest to manage.
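
For the second option, image-mode mirroring would let you pick individual VMs within a shared pool; a rough sketch (the disk names are made-up, Proxmox-style examples):

    # Enable mirroring per image rather than for the whole pool
    rbd mirror pool enable poolA image

    # Enable mirroring only on the images that need syncing
    rbd mirror image enable poolA/vm-100-disk-0 snapshot
    rbd mirror image enable poolA/vm-101-disk-0 journal

Whether snapshot-based or journal-based mirroring fits better depends on the workload; both modes exist in recent Ceph releases.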

Regards.


On Wed, May 3, 2023 at 08:21, Eugen Block <eblock@xxxxxx> wrote:

Hi,

just to clarify: do you mean that in addition to the rbd mirroring you
want another sync of different VMs between those clusters (potentially
within the same pools), or are you looking for one option only? Please
clarify. In any case, I would use dedicated pools for rbd mirroring and
then add more pools for other use cases.

Regards,
Eugen

Quoting wodel youchi <wodel.youchi@xxxxxxxxx>:

> Hi,
>
> Thanks
> I am trying to find out the best way to synchronize VMs between two
> HCI Proxmox clusters.
> Each cluster will contain 3 compute/storage nodes, and each node will
> contain 4 NVMe OSD disks.
>
> There will be a 10 Gb/s link between the two platforms.
>
> The idea is to be able to sync VMs between the two platforms and, in
> case of disaster, bring the synced VMs up.
>
> Would you recommend creating a dedicated pool on each platform for
> synchronization?
>
> Regards.
>
> On Tue, May 2, 2023, 13:30 Eugen Block <eblock@xxxxxx> wrote:
>
>> Hi,
>>
>> while your assumptions are correct (you can use the rest of the pool
>> for other non-mirrored images; at least I'm not aware of any
>> limitations), can I ask about the motivation behind this question?
>> Mixing different use cases doesn't seem like a good idea to me.
>> There's always a chance that a client with caps for that pool deletes
>> or modifies images, or even the entire pool. Why not simply create a
>> different pool and separate those clients?
>>
>> Thanks,
>> Eugen
>>
>> Quoting wodel youchi <wodel.youchi@xxxxxxxxx>:
>>
>> > Hi,
>> >
>> > When using rbd mirroring, the mirroring concerns the images only, not
>> > the whole pool? So we don't need to have a dedicated pool on the
>> > destination site to be mirrored; the only requirement is that the
>> > mirrored pools must have the same name.
>> >
>> > In other words, we create two pools with the same name, one on the
>> > source site and the other on the destination site, we create the
>> > mirror link (one-way or two-way replication), then we choose which
>> > images to sync.
>> >
>> > Both pools can be used simultaneously on both sites; it's the
>> > mirrored images that cannot be used simultaneously, only the
>> > promoted ones.
>> >
>> > Is this correct?
>> >
>> > Regards.
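
Regarding the promoted images above, switching the primary between sites looks roughly like this (the image name is a made-up example):

    # Planned failover: demote on the current primary, then promote on the peer
    rbd mirror image demote poolA/vm-100-disk-0     # on the old primary site
    rbd mirror image promote poolA/vm-100-disk-0    # on the other site

    # If the primary site is down, force-promote the non-primary copy
    rbd mirror image promote --force poolA/vm-100-disk-0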
>> > _______________________________________________
>> > ceph-users mailing list -- ceph-users@xxxxxxx
>> > To unsubscribe send an email to ceph-users-leave@xxxxxxx
>>
>>









