Re: RBD Mirroring down+unknown

On Fri, May 29, 2020 at 12:09 PM Miguel Castillo <Miguel.Castillo@xxxxxxxxxx>
wrote:

> Happy New Year, Ceph Community!
>
> I'm in the process of figuring out RBD mirroring with Ceph and having a
> really tough time with it. I'm trying to set up just one-way mirroring
> right now on some test systems (baremetal servers, all Debian 9). The first
> cluster is 3 nodes, and the 2nd cluster is 2 nodes (not worried about a
> properly performing setup, just the functionality of RBD mirroring right
> now). The purpose is to have a passive failover ceph cluster in a separate
> DC. Mirroring seems like the best solution, but if we can't get it working,
> we'll end up resorting to a scheduled rsync which is less than ideal. I've
> followed several guides, read through a lot of documentation, and nothing
> has worked for me thus far. If anyone can offer some troubleshooting help
> or insight into what I might have missed in this setup, I'd greatly
> appreciate it! I also don't fully understand the relationship between
> images and pools and how you're supposed to configure statically sized
> images for a pool that has a variable amount of data, but that's a question for afterwards, I think :)
>
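One quick aside on the image sizing question: RBD images are thin-provisioned,
so the --size you pass to "rbd create" is an upper bound rather than
preallocated space, and an image can be grown later. A minimal sketch, using
the pool/image names from your mail:

    rbd create fs_data/mirror_test --size 1G   # provisioned size only; space is consumed as data is written
    rbd du fs_data/mirror_test                 # compare provisioned vs. actual usage
    rbd resize fs_data/mirror_test --size 2G   # grow the image later if needed
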
> Once RBD mirroring is set up, the mirror test image status shows as
> down+unknown:
>
> On ceph1-dc2:
> rbd --cluster dc1ceph mirror pool status fs_data --verbose
> health: WARNING
> images: 1 total
>     1 unknown
>
> mirror_test:
>   global_id:   c335017c-9b8f-49ee-9bc1-888789537c47
>   state:       down+unknown
>   description: status not found
>   last_update:
>
> Here are the commands I run using ceph-deploy on both clusters to get
> everything up and running (run from a deploy directory on the first node of
> each cluster). The clusters are created at the same time, and rbd setup
> commands are only run after the clusters are up and healthy, and the
> fs_data pool is created.
>
> -----------------------------------------------------------
>
> Cluster 1 (dc1ceph):
>
> ceph-deploy new ceph1-dc1 ceph2-dc1 ceph3-dc1
> sed -i '$ s,.*,public_network = *.*.*.0/24\n,g' ceph.conf
> ceph-deploy install ceph1-dc1 ceph2-dc1 ceph3-dc1 --release luminous
> ceph-deploy mon create-initial
> ceph-deploy admin ceph1-dc1 ceph2-dc1 ceph3-dc1
> ceph-deploy mgr create ceph1-dc1 ceph2-dc1 ceph3-dc1
> for x in b c d e f g h i j k; do ceph-deploy osd create --data /dev/sd${x}1 ceph1-dc1 ; done
> for x in b c d e f g h i j k; do ceph-deploy osd create --data /dev/sd${x}1 ceph2-dc1 ; done
> for x in b c d e f g h i j k; do ceph-deploy osd create --data /dev/sd${x}1 ceph3-dc1 ; done
> ceph-deploy mds create ceph1-dc1 ceph2-dc1 ceph3-dc1
> ceph-deploy rgw create ceph1-dc1 ceph2-dc1 ceph3-dc1
> for f in 1 2 ; do scp ceph.client.admin.keyring ceph$f-dc2:/etc/ceph/dc1ceph.client.admin.keyring ; done
> for f in 1 2 ; do scp ceph.conf ceph$f-dc2:/etc/ceph/dc1ceph.conf ; done
> for f in 1 2 ; do ssh ceph$f-dc2 "chown ceph.ceph /etc/ceph/dc1ceph*" ; done
> ceph osd pool create fs_data 512 512 replicated
> rbd --cluster ceph mirror pool enable fs_data image
> rbd --cluster dc2ceph mirror pool enable fs_data image
> rbd --cluster ceph mirror pool peer add fs_data client.admin@dc2ceph
> (generated id: b5e347b3-0515-4142-bc49-921a07636865)
> rbd create fs_data/mirror_test --size=1G
> rbd feature enable fs_data/mirror_test journaling
> rbd mirror image enable fs_data/mirror_test
> chown ceph.ceph ceph.client.admin.keyring
>
> Cluster 2 (dc2ceph):
>
> ceph-deploy new ceph1-dc2 ceph2-dc2
> sed -i '$ s,.*,public_network = *.*.*.0/24\n,g' ceph.conf
> ceph-deploy install ceph1-dc2 ceph2-dc2 --release luminous
> ceph-deploy mon create-initial
> ceph-deploy admin ceph1-dc2 ceph2-dc2
> ceph-deploy mgr create ceph1-dc2 ceph2-dc2
> for x in b c d e f g h i j k; do ceph-deploy osd create --data /dev/sd${x}1 ceph1-dc2 ; done
> for x in b c d e f g h i j k; do ceph-deploy osd create --data /dev/sd${x}1 ceph2-dc2 ; done
> ceph-deploy mds create ceph1-dc2 ceph2-dc2
> ceph-deploy rgw create ceph1-dc2 ceph2-dc2
> apt install rbd-mirror
> for f in 1 2 3 ; do scp ceph.conf ceph$f-dc1:/etc/ceph/dc2ceph.conf ; done
> for f in 1 2 3 ; do scp ceph.client.admin.keyring ceph$f-dc1:/etc/ceph/dc2ceph.client.admin.keyring ; done
> for f in 1 2 3 ; do ssh ceph$f-dc1 "chown ceph.ceph /etc/ceph/dc2ceph*" ; done
> ceph osd pool create fs_data 512 512 replicated
> rbd --cluster ceph mirror pool peer add fs_data client.admin@dc1ceph
> (generated id: e486c401-e24d-49bc-9800-759760822282)
> systemctl enable ceph-rbd-mirror@admin
> systemctl start ceph-rbd-mirror@admin
> rbd --cluster dc1ceph mirror pool status fs_data --verbose
>
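As a side note on the peering step: it may be worth double-checking that the
peers actually got registered and that journaling is enabled on the test image.
A minimal sketch, assuming the same pool/image names as above and running from
ceph1-dc2:

    rbd --cluster ceph mirror pool info fs_data      # local (DC2) cluster: should list the dc1ceph peer
    rbd --cluster dc1ceph mirror pool info fs_data   # remote/primary cluster: should list the dc2ceph peer
    rbd --cluster dc1ceph info fs_data/mirror_test   # "features" should include journaling
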
>
> Cluster 1:
>
> ls /etc/ceph:
> ceph.client.admin.keyring
> ceph.conf
> dc2ceph.client.admin.keyring
> dc2ceph.conf
> rbdmap
> tmpG36OYs
>
> cat /etc/ceph/ceph.conf:
> [global]
> fsid = 8fede407-50e1-4487-8356-3dc98b30c500
> mon_initial_members = ceph1-dc1, ceph2-dc1, ceph3-dc1
> mon_host = *.*.*.1,*.*.*.27,*.*.*.41
> auth_cluster_required = cephx
> auth_service_required = cephx
> auth_client_required = cephx
> public_network = *.*.*.0/24
>
> cat /etc/ceph/dc2ceph.conf
> [global]
> fsid = 813ff410-02dc-47bd-b678-38add38495bb
> mon_initial_members = ceph1-dc2, ceph2-dc2
> mon_host = *.*.*.56,*.*.*.0
> auth_cluster_required = cephx
> auth_service_required = cephx
> auth_client_required = cephx
> public_network = *.*.*.0/24
>
>
> Cluster 2:
>
> ls /etc/ceph:
> ceph.client.admin.keyring
> ceph.conf
> dc1ceph.client.admin.keyring
> dc1ceph.conf
> rbdmap
> tmp_yxkPs
>
> cat /etc/ceph/ceph.conf
> [global]
> fsid = 813ff410-02dc-47bd-b678-38add38495bb
> mon_initial_members = ceph1-dc2, ceph2-dc2
> mon_host = *.*.*.56,*.*.*.70
> auth_cluster_required = cephx
> auth_service_required = cephx
> auth_client_required = cephx
> public_network = *.*.*.0/24
>
> cat /etc/ceph/dc1ceph.conf
> [global]
> fsid = 8fede407-50e1-4487-8356-3dc98b30c500
> mon_initial_members = ceph1-dc1, ceph2-dc1, ceph3-dc1
> mon_host = *.*.*.1,*.*.*.27,*.*.*.41
> auth_cluster_required = cephx
> auth_service_required = cephx
> auth_client_required = cephx
> public_network = *.*.*.0/24
>
>
> RBD Mirror daemon status:
>
> ceph-rbd-mirror@admin.service - Ceph rbd mirror daemon
>    Loaded: loaded (/lib/systemd/system/ceph-rbd-mirror@.service; enabled;
> vendor preset: enabled)
>    Active: inactive (dead) since Mon 2020-01-06 16:21:44 EST; 3s ago
>   Process: 910178 ExecStart=/usr/bin/rbd-mirror -f --cluster ${CLUSTER}
> --id admin --setuser ceph --setgroup ceph (code=exited, status=0/SUCCESS)
> Main PID: 910178 (code=exited, status=0/SUCCESS)
>
> Jan 06 16:21:44 ceph1-dc2 systemd[1]: Started Ceph rbd mirror daemon.
> Jan 06 16:21:44 ceph1-dc2 rbd-mirror[910178]: 2020-01-06 16:21:44.462916
> 7f76ecf88780 -1 auth: unable to find a keyring on
> /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,:
> (2) No such file or directory
> Jan 06 16:21:44 ceph1-dc2 rbd-mirror[910178]: 2020-01-06 16:21:44.462949
> 7f76ecf88780 -1 monclient: ERROR: missing keyring, cannot use cephx for
> authentication
> Jan 06 16:21:44 ceph1-dc2 rbd-mirror[910178]: failed to initialize: (2) No such file or directory
> 2020-01-06 16:21:44.463874 7f76ecf88780 -1 rbd::mirror::Mirror: 0x558d3ce6ce20 init: error connecting to local cluster
>

It seems like it's saying that "rbd-mirror" cannot access the local (DC2)
keyring at /etc/ceph/ceph.client.admin.keyring. Does the "ceph" user have
permission to read that keyring? Can you run "sudo -u ceph ceph health"
successfully?


>
> -------------------------------------------
>
> I also tried running the ExecStart command manually, substituting in
> different values for the parameters, and just never got it to work. If more
> info is needed, please don't hesitate to ask. Thanks in advance!
>
> -Miguel
>
>
>
>
>

-- 
Jason
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


