Re: rbd-mirror with snapshot, not doing any actual data sync

On Sun, Jun 7, 2020 at 8:06 AM Hans van den Bogert <hansbogert@xxxxxxxxx> wrote:
>
> Hi list,
>
> I've awaited Octopus for a long time to be able to use mirroring with
> snapshots, since my setup does not allow for journal-based
> mirroring. (K8s/Rook 1.3.x with Ceph 15.2.2)
>
> However, I seem to be stuck. I've come to the point where, on the
> cluster on which the (non-active) replicas should reside, I get this:
>
> ```
> rbd mirror pool status -p replicapool --verbose
>
> ...
> pvc-f7ca0b55-ed38-4d9f-b306-7db6a0157e2e:
>   global_id:   d3a301f2-4f54-4e9e-b251-c55ddbb67dc6
>   state:       up+starting_replay
>   description: starting replay
>   service:     a on nldw1-6-26-1
>   last_update: 2020-06-07 11:54:54
> ...
> ```
>
> That seems good, right? But I don't see any actual data being copied
> into the failover cluster.
>
> Anybody any ideas what to check?

Can you look at the log files for the "rbd-mirror" daemon? I wonder if
it starts and then quickly fails.
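For a Rook-based deployment like the one described above, the daemon logs can be pulled with kubectl. This is a sketch only: the namespace, label, and deployment name below follow Rook's usual naming scheme and are assumptions that may differ in your cluster.

```shell
# Locate the rbd-mirror pod (namespace and label are assumptions;
# adjust "rook-ceph" and the selector to match your deployment).
kubectl -n rook-ceph get pods -l app=rook-ceph-rbd-mirror

# Tail its logs and watch for replay errors around image start-up
# (deployment name "rook-ceph-rbd-mirror-a" is an assumption).
kubectl -n rook-ceph logs -f deploy/rook-ceph-rbd-mirror-a

# Optionally raise the daemon's debug level for more detail
# ("a" matches the service name shown in the pool status output).
ceph config set client.rbd-mirror.a debug_rbd_mirror 15
```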

> Also, is it correct that you won't see mirror snapshots with the
> 'normal' `rbd snap` commands?

Yes, "rbd snap ls" only shows user-created snapshots by default. You
can use "rbd snap ls --all" to see all snapshots on an image.
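As a concrete sketch, using the pool and image name from the status output quoted above:

```shell
# User-created snapshots only (mirror snapshots are hidden by default):
rbd snap ls replicapool/pvc-f7ca0b55-ed38-4d9f-b306-7db6a0157e2e

# Include mirror and other non-user snapshots:
rbd snap ls --all replicapool/pvc-f7ca0b55-ed38-4d9f-b306-7db6a0157e2e
```

With `--all`, mirror snapshots show up with a namespace column indicating they belong to the mirroring machinery rather than to a user.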

> Thanks in advance,
>
> Hans
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>


-- 
Jason


