Issue with geo-replication and NFS auth

Thanks all for the answers.

I think my issue was related to the slave server. I rebooted it and geo-replication now works; the issue no longer appears after a few start/stop cycles.
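
In case it helps anyone hitting the same thing, the sequence was roughly the following (volume and slave names as in the config below; the exact order of the reboot relative to the stop/start may have differed):

# gluster volume geo-replication test slave.mydomain.com:/data/test stop
  ... reboot of the slave server ...
# gluster volume geo-replication test slave.mydomain.com:/data/test start
# gluster volume geo-replication test slave.mydomain.com:/data/test status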


For reference, here is the state before the reboot (while the issue was present):

# gluster volume geo-replication test ssh://root@slave.mydomain.com:file:///data/test config
log_level: DEBUG
gluster_log_file: /var/log/glusterfs/geo-replication/test/ssh%3A%2F%2Froot%40slave.mydomain.com%3Afile%3A%2F%2F%2Fdata%2Ftest.gluster.log
ssh_command: ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /etc/glusterd/geo-replication/secret.pem
session_owner: c746cb97-91e4-489c-81e0-7e86b6dc465f
remote_gsyncd: /usr/local/libexec/glusterfs/gsyncd
state_file: /etc/glusterd/geo-replication/test/ssh%3A%2F%2Froot%40slave.mydomain.com%3Afile%3A%2F%2F%2Fdata%2Ftest.status
pid_file: /etc/glusterd/geo-replication/test/ssh%3A%2F%2Froot%40slave.mydomain.com%3Afile%3A%2F%2F%2Fdata%2Ftest.pid
log_file: /var/log/glusterfs/geo-replication/test/ssh%3A%2F%2Froot%40slave.mydomain.com%3Afile%3A%2F%2F%2Fdata%2Ftest.log
gluster_command: /usr/sbin/glusterfs --xlator-option *-dht.assert-no-child-down=true
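
One thing worth noting in this output: remote_gsyncd points to /usr/local/libexec/glusterfs/gsyncd on the slave, while the traceback below shows the master's gsyncd modules living under /usr/lib/glusterfs/glusterfs/python/syncdaemon/. Whether that remote path actually exists on the slave can be checked with something like the following (key path taken from ssh_command above):

# ssh -i /etc/glusterd/geo-replication/secret.pem root@slave.mydomain.com \
    ls -l /usr/local/libexec/glusterfs/gsyncd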


# gluster volume geo-replication file:///data/test config 
log_file: /var/log/glusterfs/geo-replication-slaves/${session_owner}:file%3A%2F%2F%2Fdata%2Ftest.log
log_level: DEBUG
gluster_log_file: /var/log/glusterfs/geo-replication-slaves/${session_owner}:file%3A%2F%2F%2Fdata%2Ftest.gluster.log
gluster_command: /usr/sbin/glusterfs --xlator-option *-dht.assert-no-child-down=true

# gluster volume geo-replication test slave.mydomain.com:/data/test/ start
Starting geo-replication session between test & slave.mydomain.com:/data/test/ has been successful


# cat ssh%3A%2F%2Froot%40slave.mydomain.com%3Afile%3A%2F%2F%2Fdata%2Ftest.log 
[2011-05-03 14:46:25.828520] I [monitor(monitor):19:set_state] Monitor: new state: starting...
[2011-05-03 14:46:25.857012] I [monitor(monitor):42:monitor] Monitor: ------------------------------------------------------------
[2011-05-03 14:46:25.857444] I [monitor(monitor):43:monitor] Monitor: starting gsyncd worker
[2011-05-03 14:46:25.957925] I [gsyncd:287:main_i] <top>: syncing: gluster://localhost:test -> ssh://slave.mydomain.com:/data/test/
[2011-05-03 14:46:25.974828] D [repce:131:push] RepceClient: call 8945:139685882976000:1304426785.97 __repce_version__() ...
[2011-05-03 14:46:26.286147] E [syncdutils:131:log_raise_exception] <top>: FAIL: 
Traceback (most recent call last):
  File "/usr/lib/glusterfs/glusterfs/python/syncdaemon/syncdutils.py", line 152, in twrap
    tf(*aa)
  File "/usr/lib/glusterfs/glusterfs/python/syncdaemon/repce.py", line 118, in listen
    rid, exc, res = recv(self.inf)
  File "/usr/lib/glusterfs/glusterfs/python/syncdaemon/repce.py", line 42, in recv
    return pickle.load(inf)
EOFError
[2011-05-03 14:46:26.433099] D [monitor(monitor):57:monitor] Monitor: worker got connected in 0 sec, waiting 59 more to make sure it's fine
[2011-05-03 14:47:25.922732] I [monitor(monitor):19:set_state] Monitor: new state: faulty
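
The EOFError above means the local RepceClient sent its __repce_version__() request but the channel to the remote gsyncd was closed before any reply came back, i.e. the gsyncd on the slave either never started or exited immediately. A quick way to test that leg by hand is to run the configured ssh_command plus the remote_gsyncd path manually, for example (assuming the gsyncd there accepts --version; any error it prints is informative either way):

# ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no \
    -i /etc/glusterd/geo-replication/secret.pem \
    root@slave.mydomain.com /usr/local/libexec/glusterfs/gsyncd --version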

And:
>On the slave I don't see a process like /usr/bin/python /usr/lib/glusterfs/glusterfs/python/syncdaemon/gsyncd.py --session-owner c746cb97-91e4-489c-81e0-7e86b6dc465f -N --listen --timeout 120 file:///data/test
>There are no slave logs in /var/log/glusterfs/geo-replication-slaves on either the master or the slave server. glusterd has been started on the slave.
>There is no firewall between or on the servers, and SSH works between the master and the slave.
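
Those checks boiled down to roughly the following on the slave (process name and log directory as quoted above):

# ps ax | grep '[g]syncd'
# ls -l /var/log/glusterfs/geo-replication-slaves/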


-- 

Cédric Lagneau

