Re: [question] one-way RBD mirroring doesn't work

On Thu, Aug 23, 2018 at 10:56 AM sat <sat@xxxxxxxxxxxx> wrote:
>
> Hi,
>
>
> I'm trying to set up one-way RBD mirroring between two Ceph clusters, but it
> hasn't worked yet. The setup seems to succeed, but after creating an RBD image on the local cluster,
> its state is reported as "unknown".
>
> ```
> $ sudo rbd --cluster local create rbd/local.img --size=1G --image-feature=exclusive-lock,journaling
> $ sudo rbd --cluster local ls rbd
> local.img
> $ sudo rbd --cluster remote ls rbd
> local.img
> $ sudo rbd --cluster local mirror pool status rbd
> health: WARNING
> images: 1 total
>     1 unknown
> $ sudo rbd --cluster remote mirror pool status rbd
> health: OK
> images: 1 total
>     1 replaying
> $
> ```
>
> Could you tell me what is wrong?

Nothing -- with one-directional RBD mirroring, only the receiving side
reports status. If you started an rbd-mirror daemon against the
"local" cluster, it would report as healthy, with that particular image
in the "stopped" state since it's the primary.

>
> # detail
>
> There are two clusters, named "local" and "remote"; "remote" is the mirror of "local".
> Both clusters have a pool named "rbd", created as sketched below.
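>
> Roughly, the pools were created along these lines (a reconstruction; the pg
> count matches the status output below):
>
> ```
> $ sudo ceph --cluster local osd pool create rbd 128
> $ sudo ceph --cluster remote osd pool create rbd 128
> ```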
>
> ## system environment
>
> - OS: ubuntu 16.04
> - kernel: 4.4.0-112-generic
> - ceph: luminous 12.2.5
>
> ## system configuration diagram
>
> ==============================================================
> +- manager(192.168.33.2): manages both clusters
> |
> +- node0(192.168.33.3): "local"'s MON, MGR, and OSD0
> |
> +- node1(192.168.33.4): "local"'s OSD1
> |
> +- node2(192.168.33.5): "local"'s OSD2
> |
> +- remote-node0(192.168.33.7): "remote"'s MON, MGR, OSD0, and ceph-rbd-mirror
> |
> +- remote-node1(192.168.33.8): "remote"'s OSD1
> |
> +- remote-node2(192.168.33.9): "remote"'s OSD2
> ================================================================
>
> # Step to reproduce
>
> 1. Prepare two clusters "local" and "remote"
>
> ```
> $ sudo ceph --cluster local -s
>   cluster:
>     id:     9faca802-745d-43d8-b572-16617e553a5f
>     health: HEALTH_WARN
>             application not enabled on 1 pool(s)
>
>   services:
>     mon: 1 daemons, quorum 0
>     mgr: 0(active)
>     osd: 3 osds: 3 up, 3 in
>
>   data:
>     pools:   1 pools, 128 pgs
>     objects: 16 objects, 12395 kB
>     usage:   3111 MB used, 27596 MB / 30708 MB avail
>     pgs:     128 active+clean
>
>   io:
>     client:   852 B/s rd, 0 op/s rd, 0 op/s wr
>
> $ sudo ceph --cluster remote -s
>   cluster:
>     id:     1ecb0aa6-5a00-4946-bdba-bad78bfa4372
>     health: HEALTH_WARN
>             application not enabled on 1 pool(s)
>
>   services:
>     mon:        1 daemons, quorum 0
>     mgr:        0(active)
>     osd:        3 osds: 3 up, 3 in
>     rbd-mirror: 1 daemon active
>
>   data:
>     pools:   1 pools, 128 pgs
>     objects: 18 objects, 7239 kB
>     usage:   3100 MB used, 27607 MB / 30708 MB avail
>     pgs:     128 active+clean
>
>   io:
>     client:   39403 B/s rd, 0 B/s wr, 4 op/s rd, 0 op/s wr
>
> $
> ```
>
>
> Both clusters look fine.
>
> 2. Setup one-way RBD pool mirroring from "local" to "remote"
>
> Set up RBD pool mirroring between "local" and "remote" by following these steps:
>
> https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/block_device_guide/block_device_mirroring
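>
> In short, the one-way, pool-mode setup boils down to roughly the following
> (a reconstruction; it assumes "local"'s ceph.conf and the client.local keyring
> have been copied to the rbd-mirror node, and the peer is only added on the
> "remote" side, which runs rbd-mirror):
>
> ```
> $ sudo rbd --cluster local mirror pool enable rbd pool
> $ sudo rbd --cluster remote mirror pool enable rbd pool
> $ sudo rbd --cluster remote mirror pool peer add rbd client.local@local
> ```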
>
> The mirroring status of both clusters looks fine, as follows.
>
> ```
> $ sudo rbd --cluster local mirror pool info rbd
> Mode: pool
> Peers: none
> $ sudo rbd --cluster local mirror pool status rbd
> health: OK
> images: 0 total
> $ sudo rbd --cluster remote mirror pool info rbd
> Mode: pool
> Peers:
>   UUID                                 NAME  CLIENT
>   53fb3a9a-c451-4552-b409-c08709ebe1a9 local client.local
> $ sudo rbd --cluster remote mirror pool status rbd
> health: OK
> images: 0 total
> $
> ```
> 3. Create an RBD image
>
> ```
> $ sudo rbd --cluster local create rbd/local.img --size=1G --image-feature=exclusive-lock,journaling
> $ sudo rbd --cluster local ls rbd
> local.img
> $ sudo rbd --cluster remote ls rbd
> local.img
> $
> ```
>
> "rbd/local.img" seemd to be created and be mirrored fine.
>
> 4. Check both cluster's status and info
>
> Execute "rbd mirror pool info/status <pool>" and "rbd info/status <image>" on
> both clusters.
>
> ## expected result
>
> Both "local" and "remote" report fine state.
>
> ## actual result
>
> Although "remote" works fine, the state reported by "local" does not look fine.
>
> ```
> $ sudo rbd --cluster local mirror pool info rbd
> Mode: pool
> Peers: none
> $ sudo rbd --cluster local mirror pool status rbd
> health: WARNING
> images: 1 total
>     1 unknown
> $ sudo rbd --cluster local info rbd/local.img
> rbd image 'local.img':
>         size 1024 MB in 256 objects
>         order 22 (4096 kB objects)
>         block_name_prefix: rbd_data.10336b8b4567
>         format: 2
>         features: exclusive-lock, journaling
>         flags:
>         create_timestamp: Mon Aug 20 06:01:29 2018
>         journal: 10336b8b4567
>         mirroring state: enabled
>         mirroring global id: 447731a7-73ce-448d-90ac-38d05065f603
>         mirroring primary: true
> $ sudo rbd --cluster local status rbd/local.img
> Watchers: none
> $ sudo rbd --cluster remote mirror pool info rbd
> Mode: pool
> Peers:
>   UUID                                 NAME  CLIENT
>   53fb3a9a-c451-4552-b409-c08709ebe1a9 local client.local
> $ sudo rbd --cluster remote mirror pool status rbd
> health: OK
> images: 1 total
>     1 replaying
> $ sudo rbd --cluster remote info rbd/local.img
> rbd image 'local.img':
>         size 1024 MB in 256 objects
>         order 22 (4096 kB objects)
>         block_name_prefix: rbd_data.1025643c9869
>         format: 2
>         features: exclusive-lock, journaling
>         flags:
>         create_timestamp: Mon Aug 20 06:01:29 2018
>         journal: 1025643c9869
>         mirroring state: enabled
>         mirroring global id: 447731a7-73ce-448d-90ac-38d05065f603
>         mirroring primary: false
> $ sudo rbd --cluster remote status rbd/local.img
> Watchers:
>         watcher=192.168.33.7:0/506894125 client.4133 cookie=139703408874736
> ```
>
> Thanks,
> Satoru Takeuchi
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 
Jason
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


