Re: RBD Mirroring

Sounds like ceph-ansible only supports one pool? I don't know, I've never used ceph-ansible. But if it created an rbd-mirror setup successfully, you should be able to configure additional pools for mirroring manually, as described in the docs [1].

[1] https://docs.ceph.com/en/latest/rbd/rbd-mirroring/#pool-configuration
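If ceph-ansible only wires up a single pool, the remaining pools can be enabled by hand with the rbd CLI. A minimal command sketch, assuming the pool name data_tes1 and the cluster/user names used in this thread (site-a as the primary, bup as the backup) — adjust all names to your setup:

```
# On both clusters: enable pool-mode mirroring for the extra pool
rbd mirror pool enable data_tes1 pool --cluster site-a
rbd mirror pool enable data_tes1 pool --cluster bup

# On the backup cluster: register the primary as a peer for that pool
rbd mirror pool peer add data_tes1 client.site-a@site-a --cluster bup

# Check that the rbd-mirror daemon picks the pool up
rbd mirror pool status data_tes1 --cluster bup
```

These commands need a running rbd-mirror daemon and reachable monitors on both sides; they are only a sketch of the steps from [1], not something verified against these particular clusters.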

Quoting Michel Niyoyita <micou12@xxxxxxxxx>:

Thank you Eugen, all errors have been solved and syncing now works in pool
mode. I am trying to use two or more pools, but only the first pool defined
is syncing, as in this configuration:

ceph_rbd_mirror_configure: true
ceph_rbd_mirror_mode: "pool"
ceph_rbd_mirror_pool: "data"
ceph_rbd_mirror_pool: "data_tes1"
ceph_rbd_mirror_remote_cluster: "site-a"
ceph_rbd_mirror_remote_user: "client.site-a"
ceph_rbd_mirror_remote_key: "AQB+wc1l4SFqNBAAG2I18SjJcnMN/wP/xdAUNw=="
ceph_rbd_mirror_remote_mon_hosts: "mon-ip:3300"

Here only the data pool is syncing; data_tes1 is not. I need help on how to
define more pools, because in our production cluster we have 4 pools
(images, volume, ....) which we want to back up.

Thank you for your help

Michel

On Tue, Feb 13, 2024 at 8:11 PM Eugen Block <eblock@xxxxxx> wrote:

So the error you reported first is now resolved? What does the mirror
daemon log say?

Quoting Michel Niyoyita <micou12@xxxxxxxxx>:

> I have configured it as follows:
>
> ceph_rbd_mirror_configure: true
> ceph_rbd_mirror_mode: "pool"
> ceph_rbd_mirror_pool: "images"
> ceph_rbd_mirror_remote_cluster: "prod"
> ceph_rbd_mirror_remote_user: "admin"
> ceph_rbd_mirror_remote_key: "AQDGVctluyvAHRAAtjeIB3ZZ75L8yT/erZD7eg=="
> ceph_rbd_mirror_remote_mon_hosts: "mon-ip:3300"
>
> This is the config in rbdmirrors.yml.
>
> Michel
>
> On Tue, Feb 13, 2024 at 4:07 PM Eugen Block <eblock@xxxxxx> wrote:
>
>> You didn't answer whether the remote_key is defined. If it's not, then
>> your rbd-mirror daemon won't work, which matches what you pasted (daemon
>> health: ERROR). You need to fix that first.
>>
>> Quoting Michel Niyoyita <micou12@xxxxxxxxx>:
>>
>> > Thanks Eugen,
>> >
>> > On my prod cluster (as I named it), this is the output of the following
>> > command checking the status: rbd mirror pool status images --cluster prod
>> > health: WARNING
>> > daemon health: UNKNOWN
>> > image health: WARNING
>> > images: 4 total
>> >     4 unknown
>> >
>> > but on the bup cluster there are some errors which I am not able to
>> > figure out:
>> > rbd mirror pool status images --cluster bup
>> > health: ERROR
>> > daemon health: ERROR
>> > image health: OK
>> > images: 0 total
>> >
>> > So once I create an image on the prod cluster, there is no syncing
>> > between the two clusters. But I can create an image from one cluster on
>> > the other, which means there is communication between them; it is just
>> > the images pool that is not syncing.
>> > Kindly help me again if I am missing something.
>> >
>> > Michel
>> >
>> > On Tue, Feb 13, 2024 at 3:41 PM Eugen Block <eblock@xxxxxx> wrote:
>> >
>> >> Did you define ceph_rbd_mirror_remote_key? According to the docs [1]:
>> >>
>> >> > ceph_rbd_mirror_remote_key : This must be the same value as the user
>> >> > ({{ ceph_rbd_mirror_local_user }}) keyring secret from the primary
>> >> > cluster.
>> >>
>> >> [1] https://docs.ceph.com/projects/ceph-ansible/en/latest/rbdmirror/index.html
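The matching secret can be read straight from the primary cluster's keyring. A command sketch, assuming the peer user is client.rbd-mirror-peer (the entity name may differ in your deployment):

```
# On the primary ("prod") cluster: print the bare secret for the mirror peer user
ceph auth get-key client.rbd-mirror-peer --cluster prod

# Paste the printed value into rbdmirrors.yml as ceph_rbd_mirror_remote_key
```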
>> >>
>> >> Quoting Michel Niyoyita <micou12@xxxxxxxxx>:
>> >>
>> >> > Hello team,
>> >> >
>> >> > I have two clusters in a testing environment, deployed using
>> >> > ceph-ansible, running on Ubuntu 20.04 with the Ceph Pacific release.
>> >> > I am testing mirroring between the two clusters in pool mode. Our
>> >> > production cluster is the backend storage for OpenStack. This is how
>> >> > I configured rbdmirrors.yml:
>> >> >
>> >> > ceph_rbd_mirror_configure: true
>> >> > ceph_rbd_mirror_mode: "pool"
>> >> > ceph_rbd_mirror_remote_cluster: "prod"
>> >> > ceph_rbd_mirror_remote_user: "admin"
>> >> >
>> >> > This is my primary cluster "/etc/ceph/ "directory:
>> >> >
>> >> > root@ceph-osd1:/etc/ceph# ls -l
>> >> > total 52
>> >> > -r-------- 1 root root  151 Feb 13 08:47 bup.client.admin.keyring
>> >> > -rw-r--r-- 1 root root  866 Feb 13 08:45 bup.conf
>> >> > -r-------- 1 ceph ceph  151 Feb 12 13:35 ceph.client.admin.keyring
>> >> > -rw------- 1 ceph ceph  131 Feb 12 13:41 ceph.client.crash.keyring
>> >> > -rw-r--r-- 1 ceph ceph  863 Feb 12 13:35 ceph.conf
>> >> > -rw-rw-r-- 1 ceph ceph 1294 Feb 12 13:40 ceph-dashboard.crt
>> >> > -rw------- 1 ceph ceph 1704 Feb 12 13:40 ceph-dashboard.key
>> >> > -r-------- 1 ceph ceph  140 Feb 12 13:36 ceph.mgr.ceph-osd1.keyring
>> >> > -r-------- 1 ceph ceph  140 Feb 12 13:36 ceph.mgr.ceph-osd2.keyring
>> >> > -r-------- 1 ceph ceph  140 Feb 12 13:36 ceph.mgr.ceph-osd3.keyring
>> >> > -r-------- 1 root root  151 Feb 13 08:38 prod.client.admin.keyring
>> >> > -rw-r--r-- 1 root root  863 Feb 13 08:37 prod.conf
>> >> > -rw-r--r-- 1 root root   92 Aug 29 16:38 rbdmap
>> >> >
>> >> >
>> >> > and this is my secondary "/etc/ceph" directory:
>> >> >
>> >> > root@ceph-osdb1:/etc/ceph# ls -l
>> >> > total 60
>> >> > -rw------- 1 root root    0 Feb 13 09:26 ansible.1e1q9lzv_ceph-ansible
>> >> > -rw------- 1 root root    0 Feb 13 09:32 ansible.dk1h4kzp_ceph-ansible
>> >> > -r-------- 1 root root  151 Feb 13 09:02 bup.client.admin.keyring
>> >> > -rw-r--r-- 1 root root  866 Feb 13 09:02 bup.conf
>> >> > -r-------- 1 ceph ceph  151 Feb 13 08:23 ceph.client.admin.keyring
>> >> > -rw------- 1 ceph ceph  131 Feb 13 08:29 ceph.client.crash.keyring
>> >> > -rw------- 1 ceph ceph  138 Feb 13 09:19 ceph.client.rbd-mirror.ceph-osdb1.keyring
>> >> > -rw------- 1 ceph ceph  132 Feb 13 09:19 ceph.client.rbd-mirror-peer.keyring
>> >> > -rw-r--r-- 1 ceph ceph  866 Feb 13 08:23 ceph.conf
>> >> > -rw-rw-r-- 1 ceph ceph 1302 Feb 13 08:28 ceph-dashboard.crt
>> >> > -rw------- 1 ceph ceph 1708 Feb 13 08:28 ceph-dashboard.key
>> >> > -r-------- 1 ceph ceph  141 Feb 13 08:24 ceph.mgr.ceph-osdb1.keyring
>> >> > -r-------- 1 ceph ceph  141 Feb 13 08:24 ceph.mgr.ceph-osdb2.keyring
>> >> > -r-------- 1 ceph ceph  141 Feb 13 08:24 ceph.mgr.ceph-osdb3.keyring
>> >> > -r-------- 1 root root  151 Feb 13 09:01 prod.client.admin.keyring
>> >> > -rw-r--r-- 1 root root  863 Feb 13 09:01 prod.conf
>> >> > -rw-r--r-- 1 root root   92 Aug 29 16:38 rbdmap
>> >> >
>> >> >
>> >> > Kindly, I need your help if I am missing something, because while
>> >> > running I am facing the following error:
>> >> >
>> >> > TASK [ceph-rbd-mirror : create a temporary file] ***************************************
>> >> > Tuesday 13 February 2024  09:58:59 +0000 (0:00:00.349)       0:04:00.698 ******
>> >> > changed: [ceph-osdb1 -> ceph-osdb1]
>> >> >
>> >> > TASK [ceph-rbd-mirror : write secret to temporary file] ********************************
>> >> > Tuesday 13 February 2024  09:58:59 +0000 (0:00:00.533)       0:04:01.232 ******
>> >> > fatal: [ceph-osdb1 -> {{ groups[mon_group_name][0] }}]: FAILED! =>
>> >> >   msg: |-
>> >> >     The task includes an option with an undefined variable. The error was:
>> >> >     'ceph_rbd_mirror_remote_key' is undefined
>> >> >
>> >> >     The error appears to be in
>> >> >     '/opt/ceph-ansible/roles/ceph-rbd-mirror/tasks/configure_mirroring.yml': line 150,
>> >> >     column 7, but may be elsewhere in the file depending on the exact syntax problem.
>> >> >
>> >> >     The offending line appears to be:
>> >> >
>> >> >         - name: write secret to temporary file
>> >> >           ^ here
>> >> >
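The traceback above means the role's configure_mirroring.yml task references ceph_rbd_mirror_remote_key, which was never set. A config-fragment sketch of rbdmirrors.yml with the variable defined (the key value here is a placeholder, not a real secret; pool and cluster names are the ones from this thread):

```
ceph_rbd_mirror_configure: true
ceph_rbd_mirror_mode: "pool"
ceph_rbd_mirror_pool: "images"
ceph_rbd_mirror_remote_cluster: "prod"
ceph_rbd_mirror_remote_user: "admin"
ceph_rbd_mirror_remote_key: "AQD...=="   # keyring secret of the remote user on "prod"
ceph_rbd_mirror_remote_mon_hosts: "mon-ip:3300"
```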
>> >> > Kindly help
>> >> > _______________________________________________
>> >> > ceph-users mailing list -- ceph-users@xxxxxxx
>> >> > To unsubscribe send an email to ceph-users-leave@xxxxxxx
>> >>
>> >>
>> >>
>>
>>
>>
>>





