----- Original Message -----
> From: "Danny Lee" <danny.lee@xxxxxxxxxx>
> To: gluster-users@xxxxxxxxxxx
> Sent: Thursday, August 4, 2016 1:25:43 AM
> Subject: Reconnecting Client to Brick
>
> Hi,
>
> I have a 3-node replicated cluster using the native glusterfs mount, and
> through some heavy IO load, the gluster logs show that one of the clients
> (Client A) disconnected from one of the bricks (Brick 1) because of a 42
> second ping timeout.

How easy is it for you to reproduce the issue? Is it possible to capture
some data for us when you run into this problem? A generic reproducer
(one we could run even on our own machines) would be great :).

Before starting the test:

1. Apply the patch at http://review.gluster.org/15109 to your gluster
   codebase, then rebuild, reinstall, restart and remount glusterfs
   (please restart the bricks too).
2. Start capturing strace output of the glusterfs mount process and the
   brick process.
3. Start capturing a tcpdump of the connection between the client and the
   brick process.

After you hit the bug, stop the test and send us back:

1. Logs of the mount process and the brick process whose connection has
   gone bad.
2. strace output of the glusterfs mount process and the brick process
   whose connection has gone bad.
3. The tcpdump output.

> After waiting two hours, Client A never reconnected back to Brick 1, even
> after stopping the heavy IO load. To verify, I added a new file to the
> mount on Client A and verified that Brick 1 did not get the file. I also
> verified that when calling "gluster volume heal <vol> info", the new file
> appears on the heal list. This file never gets healed.
>
> Then I tried to add a file to Client B and verified that the file got
> added to Brick 1. This means Brick 1 is only disconnected from Client A.
>
> I have 3 questions:
> 1. How can you tell if a client has disconnected from a brick (for
> monitoring purposes)? Right now, I am doing a hacky method by looking at
> the client logs for specific messages.
We have a meta xlator loaded on native fuse mounts, which allows users to
peek into the internal data structures of glusterfs:

[root@unused glusterfs]# gluster volume info newptop
Volume Name: newptop
Type: Distribute
Volume ID: 62981360-9417-4a27-ba82-4647e76061d4
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: booradley:/home/export/newptop
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: off

[root@unused glusterfs]# mount
booradley:/newptop on /mnt/glusterfs type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

[root@unused glusterfs]# cat /mnt/glusterfs/.meta/graphs/active/newptop-client-0/private
[xlator.protocol.client.newptop-client-0.priv]
connecting = 0
connected = 1
total_bytes_read = 56188
ping_timeout = 42
total_bytes_written = 61436
ping_msgs_sent = 6
msgs_sent = 283

Note that we are accessing the internals of newptop-client-0, which is a
protocol/client xlator. As can be seen above, it says connected. However,
for the purpose of our debugging, we also need the status of the
socket-level connection, which is not available through the meta interface
yet. I've filed a bug to track this [1].

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1365085

> 2. How long does it take for a client to reconnect to a brick or does it
> ever?

The client has a reconnect timer event that retries the connection at
progressively longer intervals. Once a connection is successful, the event
is removed. Ideally we should see the connection being set up within a few
minutes of the brick becoming available again. Why it took so long in your
case is a puzzle that needs to be solved :). Also, we have a bug where the
connection is never established (though in the glusterd process) at [2].

[2] https://bugzilla.redhat.com/show_bug.cgi?id=1300241

> 3. If it doesn't, is there something I can do to reconnect without losing
> quorum?
> 4. If it does, is this configurable?
>
> Thank you.
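For steps 2 and 3 of the capture list earlier, something along these lines
should work. Treat it as a sketch: the PIDs, output paths, brick host and
brick port below are placeholders (take the real brick port from
"gluster volume status"), not values from your setup.

```shell
# Sketch of the strace/tcpdump capture (steps 2 and 3 of the list above).
# All arguments are example values to substitute.
capture_cmds() {
    mount_pid=$1; brick_pid=$2; brick_host=$3; brick_port=$4
    # -ff follows threads into per-tid output files, -tt adds timestamps
    echo "strace -ff -tt -o /tmp/strace-mount -p $mount_pid"
    echo "strace -ff -tt -o /tmp/strace-brick -p $brick_pid"
    # capture only the client<->brick traffic, written to a pcap file
    echo "tcpdump -i any -w /tmp/client-brick.pcap host $brick_host and port $brick_port"
}

# Example invocation; in practice find the PIDs first, e.g. with
#   pgrep -f 'glusterfs.*fuse'   (mount process)
#   pgrep glusterfsd             (brick process)
capture_cmds 1234 5678 brick1.example.com 49152
```

Run the printed commands (in the background or separate terminals) before
starting the test, and stop them once you hit the bug.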
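As an aside, the .meta interface shown above can drive a basic monitoring
check for your question 1, instead of grepping client logs. A minimal
sketch, assuming a fuse mount at /mnt/glusterfs (the path layout follows
the cat example above; adjust META_DIR to your mount point):

```shell
# Report any protocol/client xlator whose meta "private" file does not
# show "connected = 1". /mnt/glusterfs is an assumed mount point.
META_DIR=${META_DIR:-/mnt/glusterfs/.meta/graphs/active}

check_clients() {
    for priv in "$META_DIR"/*-client-*/private; do
        [ -f "$priv" ] || continue
        # pick the value of the "connected = N" line
        state=$(awk -F' = ' '/^connected/ {print $2}' "$priv")
        if [ "$state" != "1" ]; then
            echo "DISCONNECTED: $(basename "$(dirname "$priv")")"
        fi
    done
}

check_clients
```

A cron job or monitoring agent can alert on any non-empty output. Keep in
mind the caveat above: this reflects the protocol/client xlator's view,
not the socket-level state, until [1] is addressed.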
> This message and any attachments are solely for the intended recipient.
> If you are not the intended recipient, disclosure, copying, use, or
> distribution of the information included in this message is prohibited --
> please immediately and permanently delete this message.

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users