Re: [question] one-way RBD mirroring doesn't work

On Mon, Aug 26, 2019 at 7:54 AM V A Prabha <prabhav@xxxxxxx> wrote:
>
> Dear Jason
>   Let me explain my setup first.
>   The DR centre is 300 km away from the primary site.
>   Site-A - OSD 0 - 1 TB, Mon - 10.236.248.XX/24
>   Site-B - OSD 0 - 1 TB, Mon - 10.236.228.XX/27 - rbd-mirror daemon running
>   All ports are open and there is no firewall; there is connectivity between the two sites.
>
>   In my initial setup I used common L2 connectivity between the two sites and got the same error as now.
>   I have since changed the configuration to L3, but I still get the same error.
>
> root@meghdootctr:~# rbd mirror image status volumes/meghdoot
> meghdoot:
>   global_id:   52d9e812-75fe-4a54-8e19-0897d9204af9
>   state:       up+syncing
>   description: bootstrapping, IMAGE_COPY/COPY_OBJECT 0%
>   last_update: 2019-08-26 17:00:21
> Please point out where I am making a mistake, or what is wrong with my configuration.

I have no clue what's wrong with your site. The best suggestion I can
offer is to enable "debug rbd_mirror=20" / "debug rbd=20" logging for
rbd-mirror and see where it's hanging.
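
For example, one way to do that (just a sketch; it assumes the standard
ceph.conf location on the rbd-mirror host and that the daemon runs under
systemd -- the instance name after "@" is whatever cephx ID your rbd-mirror
daemon was deployed with, so "admin" below is only a placeholder):

    # /etc/ceph/ceph.conf on the host running rbd-mirror
    [client]
        debug rbd = 20
        debug rbd_mirror = 20
        log file = /var/log/ceph/$cluster-$name.log

    # restart the daemon so it picks up the new settings
    systemctl restart ceph-rbd-mirror@admin.service

The resulting log should show which step of the image sync it is stuck on.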

> Site-A:
>
> [global]
> fsid = 494971c1-75e7-4866-b9fb-e98cb8171473
> mon_initial_members = clouddr
> mon_host = 10.236.247.XX
> auth_cluster_required = cephx
> auth_service_required = cephx
> auth_client_required = cephx
> public network = 10.236.247.0/24
> osd pool default size = 1
> mon_allow_pool_delete = true
> rbd default features = 125
>
> Site-B:
>
> [global]
> fsid = 494971c1-75e7-4866-b9fb-e98cb8171473
> mon_initial_members = meghdootctr
> mon_host = 10.236.228.XX
> auth_cluster_required = cephx
> auth_service_required = cephx
> auth_client_required = cephx
> public network = 10.236.228.64/27
> osd pool default size = 1
> mon_allow_pool_delete = true
> rbd default features = 125
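
(As an aside, "rbd default features = 125" does include journaling, which
journal-based mirroring needs: 125 = 1 layering + 4 exclusive-lock +
8 object-map + 16 fast-diff + 32 deep-flatten + 64 journaling. If you want
to double-check an individual image on the primary cluster, something along
these lines should show it; the pool/image name is just taken from your
example above:

    rbd info volumes/meghdoot | grep features
)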
>
> Regards
> V.A.Prabha
>
> On August 20, 2019 at 7:00 PM Jason Dillaman <jdillama@xxxxxxxxxx> wrote:
>
> On Tue, Aug 20, 2019 at 9:23 AM V A Prabha < prabhav@xxxxxxx> wrote:
>
> I face the same problem that Sat mentioned.
>   All the images created at the primary site are in the state down+unknown.
>   Hence, on the secondary site the images stay at 0% up+syncing all the time; there is no progress.
>   The only error that is continuously logged is:
>   2019-08-20 18:04:38.556908 7f7d4cba3700 -1 rbd::mirror::InstanceWatcher: C_NotifyInstanceRequest: 0x7f7d4000f650 finish: resending after timeout
>
>
> This sounds like your rbd-mirror daemon cannot contact all OSDs. Double check your network connectivity and firewall to ensure that rbd-mirror daemon can connect to *both* Ceph clusters (local and remote).
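
For example, from the host that runs rbd-mirror you could check that both
clusters respond (again only a sketch; it assumes you keep separate site-a /
site-b config files and keyrings on that host, which is the usual one-way
mirroring layout, so adjust the cluster names to whatever you actually use):

    # local (secondary) cluster
    ceph --cluster site-b -s

    # remote (primary) cluster
    ceph --cluster site-a -s

If either command hangs or times out there, look at the monitor port (6789,
plus 3300 on releases with msgr v2) and the default OSD port range
(6800-7300) across the 300 km link.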
>
>   The setup is as follows:
>    One OSD is created at the primary site, with cluster name [site-a], and one OSD at the secondary site, with cluster name [site-b]; both have the same ceph.conf file.
>    The rbd-mirror daemon is installed at the secondary site [which is 300 km away from the primary site].
>    We are trying to integrate this with our cloud, but the Cinder volume fails to sync every time.
>   Primary Site Output
> root@clouddr:/etc/ceph# rbd mirror pool status volumesnew --verbose
> health: WARNING
> images: 4 total
>     4 unknown
>
> boss123:
>   global_id:   7285ed6d-46f4-4345-b597-d24911a110f8
>   state:       down+unknown
>   description: status not found
>   last_update:
>
> new123:
>   global_id:   e9f2dd7e-b0ac-4138-bce5-318b40e9119e
>   state:       down+unknown
>   description: status not found
>   last_update:
>
> root@clouddr:/etc/ceph# rbd mirror pool info volumesnew
> Mode: pool
> Peers: none
> root@clouddr:/etc/ceph# rbd mirror pool status volumesnew
> health: WARNING
> images: 4 total
>     4 unknown
>
> Secondary Site
> root@meghdootctr:~# rbd mirror image status volumesnew/boss123
> boss123:
>   global_id:   7285ed6d-46f4-4345-b597-d24911a110f8
>   state:       up+syncing
>   description: bootstrapping, IMAGE_COPY/COPY_OBJECT 0%
>   last_update: 2019-08-20 17:24:18
> Please help me identify what I am missing.
>
> Regards
> V.A.Prabha
>
> --
> Jason



-- 
Jason
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


