RBD Mirror: Unable to re-bootstrap mirror daemons


 



I initially had a working setup in my test clusters, with two daemons
running on the MON nodes of each cluster. I took them down, then
uninstalled and purged rbd-mirror (apt-get remove and apt-get purge)
before reinstalling it on the respective clusters. The daemons now
refuse to come back up or talk to each other.
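For reference, the teardown was roughly the following (a sketch; the systemd instance name `admin` is an assumption, adjust to whatever client ID your daemon runs as):

```shell
# Stop the running mirror daemon (instance name "admin" assumed)
systemctl stop ceph-rbd-mirror@admin

# Remove the package, then purge its configuration
apt-get remove rbd-mirror
apt-get purge rbd-mirror
```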

I see the following messages in the log/stdout:

2017-03-18 13:26:31.444506 7fd2f1fd7c80 -1 rbd::mirror::Replayer:
0x7fd2fb5d1680 init_rados: error connecting to remote peer uuid:
6a98a0eb-869d-4b4f-8bc7-da4bbe66e5aa cluster: ceph3 client:
client.rbd-mirror-remote: (22) Invalid argument
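Since the error mentions the peer UUID and client name, I checked the peer registration and auth entries with something like the following (a sketch; the pool name `rbd` and the keyring paths are assumptions from my setup):

```shell
# List the peers registered for the mirrored pool (pool name assumed)
rbd mirror pool info rbd

# Confirm the auth entry the daemon connects as still exists locally
ceph auth get client.rbd-mirror-remote

# Verify the remote cluster's conf and keyring are present and readable
ls -l /etc/ceph/ceph3.conf \
      /etc/ceph/ceph3.client.rbd-mirror-remote.keyring
```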


I made the following changes between the uninstall and reinstall:

1. I had a "client.rbd-mirror-remote" user on each cluster so the
peers could talk to each other. This user was deleted from the `auth`
list before I reinstalled the daemon.
2. After installing rbd-mirror, I created two new users, local and
remote, appending "01" as a suffix, with their keys in the same
location as before. The remote keys were copied over to their peers,
as was done with the earlier rbd-mirror setup that worked.
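The re-creation in step 2 looked roughly like this on each cluster (a sketch; the caps, pool name `rbd`, and remote cluster name `ceph3` are assumptions from my setup, and the exact caps may differ by release, so check the rbd-mirror docs for your version):

```shell
# Recreate the mirroring user with rbd caps (caps assumed; verify
# against the rbd-mirror documentation for your Ceph release)
ceph auth get-or-create client.rbd-mirror-remote01 \
    mon 'profile rbd' osd 'profile rbd' \
    -o /etc/ceph/ceph.client.rbd-mirror-remote01.keyring

# Re-register the peer for the pool, pointing at the new user, in
# case the old peer entry still references the deleted one
rbd mirror pool peer add rbd client.rbd-mirror-remote01@ceph3
```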

rbd-mirror on either cluster failed to start with the above error.

I even tried creating a new remote user with the same name as before
(on both clusters), but it still fails with the same error message.

Any idea what I might be doing wrong? Is there a way to properly purge
an rbd-mirror daemon and start over from scratch?
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
