Can you send the volume specification files of the two servers? Do the two AFR volumes in the server configurations list their children (subvolumes) in the same order?
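For reference, a server-side AFR volfile for lb1 would typically look something like the sketch below. This is only an illustration, not your actual configuration -- the volume names, export directory, and auth option are assumptions. The point to check is the subvolumes line of the AFR volume: both servers' AFR children must refer to the same bricks in the same order (e.g. lb1's brick first in both files, which means the local and remote entries swap places in lb2's volfile).

    # local backend storage (path is an assumption)
    volume posix
      type storage/posix
      option directory /data/export
    end-volume

    # the other server's brick; on lb2 this would point at lb1
    volume remote
      type protocol/client
      option transport-type tcp/client
      option remote-host lb2.world.net
      option remote-subvolume posix
    end-volume

    # server-side AFR over the local and remote bricks;
    # keep the children in the same order on both servers
    volume gfs
      type cluster/afr
      subvolumes posix remote
    end-volume

    # export the replicated volume to the clients
    volume server
      type protocol/server
      option transport-type tcp/server
      option auth.addr.gfs.allow *
      subvolumes gfs
    end-volume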
Regards,
Hi,
I've tried setting up the HA translator using glusterfs-2.0.0rc1 and fuse-2.7.4glfs11. I have 2 clients connecting to 2 glusterfs servers (lb1 and lb2), which run server-side AFR between themselves. To test whether HA works, I shut down lb1. I could see both clients switching to lb2 in the log (fuse-bridge: inode num of /globalshare changed 54394886 -> 30605318), so that works fine. However, when I restarted lb1 and tried shutting down lb2, one of the clients reported "Transport endpoint is not connected"; the logs are below. Can someone advise whether the HA translator is currently stable?
#############################################
##  GlusterFS Client Volume Specification  ##
#############################################

# the exported volume to mount  # required!
volume cluster
  type protocol/client
  option transport-type tcp/client
  option remote-host lb1.world.net
  option remote-subvolume gfs    # exported volume
  option transport-timeout 5     # value in seconds, should be relatively low
end-volume

# the exported volume to mount  # required!
volume cluster2
  type protocol/client
  option transport-type tcp/client
  option remote-host lb2.world.net
  option remote-subvolume gfs    # exported volume
  option transport-timeout 5     # value in seconds, should be relatively low
end-volume

volume glusterfs-ha
  type cluster/ha
  subvolumes cluster cluster2
end-volume

+-----
2009-02-05 18:08:01 E [socket.c:708:socket_connect_finish] cluster2: connection failed (Connection refused)
2009-02-05 18:08:06 W [fuse-bridge.c:304:need_fresh_lookup] fuse-bridge: inode num of /globalshare changed 54394886 -> 30605318
2009-02-05 18:09:25 E [socket.c:104:__socket_rwv] cluster: readv failed (Connection reset by peer)
2009-02-05 18:09:25 E [socket.c:566:socket_proto_state_machine] cluster: socket read failed (Connection reset by peer) in state 1 (192.168.89.151:6996)
2009-02-05 18:09:25 E [saved-frames.c:148:saved_frames_unwind] cluster: forced unwinding frame type(1) op(READ)
Regards,
melvin
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
http://lists.nongnu.org/mailman/listinfo/gluster-devel
--
Raghavendra G