Re: Using rbd-mirror in existing pools

On Thu, May 14, 2020 at 12:47 PM Kees Meijs | Nefos <kees@xxxxxxxx> wrote:

> Hi Anthony,
>
> A one-way mirror suits me fine in this case (the old cluster will be
> dismantled in the meantime), so I guess a single rbd-mirror daemon should
> suffice.
>
> The pool consists of OpenStack Cinder volumes named after a UUID (i.e.
> volume-ca69183a-9601-11ea-8e82-63973ea94e82 and such). The chance of
> conflicts is close to zero.
>
> My main concern is pulling images into a non-empty pool. It would be
> (very) bad if rbd-mirror tries to be smart and removes images that don't
> exist in the source pool.
>

rbd-mirror will only remove images that (1) have mirroring enabled and (2)
are not in a split-brain state with their peer. It's totally fine to mirror
only a subset of images within a pool, and it's fine to mirror one-way only.
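As a minimal sketch of that per-image setup on the source side (the pool name "cinder" is an assumption; the volume name is the example from this thread -- adjust both to your environment):

```shell
# Put the pool in per-image mirroring mode, so only explicitly
# enabled images are replicated, not the whole pool.
rbd mirror pool enable cinder image

# Enable journal-based mirroring on a single Cinder volume.
rbd mirror image enable cinder/volume-ca69183a-9601-11ea-8e82-63973ea94e82

# Check replication health for the pool.
rbd mirror pool status cinder --verbose
```

For a one-way setup, only the destination cluster needs a running rbd-mirror daemon pointed at the source as its peer.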


>
> Regards and thanks again,
> Kees
>
> On 14-05-2020 17:41, Anthony D'Atri wrote:
> > When you set up the rbd-mirror daemons with each others’ configs, and
> initiate mirroring of a volume, the destination will create the volume in
> the destination cluster and pull over data.
> >
> > Hopefully you’re creating unique volume names so there won’t be
> conflicts; that said, if the destination does have a collision, the
> existing image won’t be overwritten.
> >
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>


-- 
Jason



