geo-repl status: Faulty & errors

hi everyone,

I'm trying geo-replication for the first time. I followed the official howto and every step reported success, but when I checked the status it shows "Faulty".
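(For reference, this is roughly the status command I ran; the volume and slave names are taken from the logs below and the syntax is the one from the howto, so treat it as my best reconstruction:

gluster volume geo-replication QEMU-VMs root@10.5.6.32::QEMU-VMs-Replica status
)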
Errors I see:
...
[2017-02-01 12:11:38.103259] I [monitor(monitor):268:monitor] Monitor: starting gsyncd worker
[2017-02-01 12:11:38.342930] I [changelogagent(agent):73:__init__] ChangelogAgent: Agent listining...
[2017-02-01 12:11:38.354500] I [gsyncd(/__.aLocalStorages/1/0-GLUSTERs/1GLUSTER-QEMU-VMs):736:main_i] <top>: syncing: gluster://localhost:QEMU-VMs -> ssh://root@10.5.6.32:gluster://localhost:QEMU-VMs-Replica
[2017-02-01 12:11:38.581310] E [syncdutils(/__.aLocalStorages/1/0-GLUSTERs/1GLUSTER-QEMU-VMs):252:log_raise_exception] <top>: connection to peer is broken
[2017-02-01 12:11:38.581964] E [resource(/__.aLocalStorages/1/0-GLUSTERs/1GLUSTER-QEMU-VMs):234:errlog] Popen: command "ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-VLX7ff/2bad8986ecbd9ad471c368528e0770f6.sock root@10.5.6.32 /nonexistent/gsyncd --session-owner 8709782a-daa5-4434-a816-c4e0aef8fef2 -N --listen --timeout 120 gluster://localhost:QEMU-VMs-Replica" returned with 255, saying:
[2017-02-01 12:11:38.582236] E [resource(/__.aLocalStorages/1/0-GLUSTERs/1GLUSTER-QEMU-VMs):238:logerr] Popen: ssh> Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
[2017-02-01 12:11:38.582945] I [syncdutils(/__.aLocalStorages/1/0-GLUSTERs/1GLUSTER-QEMU-VMs):220:finalize] <top>: exiting.
[2017-02-01 12:11:38.586689] I [repce(agent):92:service_loop] RepceServer: terminating on reaching EOF.
[2017-02-01 12:11:38.587055] I [syncdutils(agent):220:finalize] <top>: exiting.
[2017-02-01 12:11:38.586905] I [monitor(monitor):334:monitor] Monitor: worker(/__.aLocalStorages/1/0-GLUSTERs/1GLUSTER-QEMU-VMs) died before establishing connection

It's a bit puzzling, because password-less SSH works; I had it set up before Gluster, so I also tried "create no-verify" just in case (I've pasted a manual SSH test further below).
This path (/__.aLocalStorages/1/0-GLUSTERs/1GLUSTER-QEMU-VMs) belongs to the master volume, so I understand it is the slave side that is failing here, right?
Also, this is just one peer of a two-peer volume; I'd guess the process never even reaches the second peer because the first one fails, which is why it doesn't show up in the logs. Correct?
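In case it helps with diagnosing, this is the manual SSH test I can run to mimic what gsyncd does; the key path and remote host are copied from the Popen error above, and the second command just uses my own default key (the password-less setup that works), so it's only a sketch of the check, not exactly what gsyncd runs:

# test with the key gsyncd uses (path taken from the Popen line above)
ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no \
    -i /var/lib/glusterd/geo-replication/secret.pem root@10.5.6.32 'echo ok'

# test with my own default key, which is the password-less SSH that works
ssh root@10.5.6.32 'echo ok'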

many thanks for all the help,
L.
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users
