You can try running ncat from gfs3:
ncat -z -v gfs1 49152
ncat -z -v gfs2 49152
If ncat fails to connect -> it's definitely a firewall.
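If ncat isn't installed on the node, a plain TCP connect test does the same job. Here is a minimal Python sketch (the host names and port are the ones from this thread; adjust as needed):

```python
import socket

def port_open(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds, False otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# From gfs3, check the brick ports on the other two nodes:
# port_open("gfs1", 49152)
# port_open("gfs2", 49152)
```

A False result here is equivalent to ncat failing to connect, pointing at a firewall between the nodes.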
Best Regards,
Strahil Nikolov
On May 30, 2019 01:33, David Cunningham <dcunningham@xxxxxxxxxxxxx> wrote:
Hi Ravi,

I think it probably is a firewall issue with the network provider. I was hoping to see a specific connection failure message we could send to them, but will take it up with them anyway.

Thanks for your help.

On Wed, 29 May 2019 at 23:10, Ravishankar N <ravishankar@redhat.com> wrote:

I don't see a "Connected to gvol0-client-1" in the log. Perhaps a firewall issue like the last time? Even in the earlier add-brick log from the other email thread, connection to the 2nd brick was not established.
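Checking the log for "Connected to" / connection-failure messages, as Ravi describes, can be scripted. A hedged sketch below; the sample lines are hypothetical and only loosely in the style of glusterfs client logs (real entries carry timestamps and message IDs), and on a live system you would read glfsheal-gvol0.log instead:

```python
import re

# Pattern for connection state changes; extend as needed for your log format.
CONN_PATTERN = re.compile(r"Connected to|disconnected from|connection .* failed")

def connection_events(lines):
    """Return only the lines that mention a connection state change."""
    return [line for line in lines if CONN_PATTERN.search(line)]

# Hypothetical sample lines, not verbatim glusterfs output:
sample = [
    "0-gvol0-client-0: Connected to gvol0-client-0, attached to remote volume",
    "0-gvol0-client-1: connection to gfs2:49152 failed",
    "0-glusterfs: some unrelated message",
]

for event in connection_events(sample):
    print(event)
```

If no "Connected to" line appears for a given client (here gvol0-client-1), that brick's connection was never established, which is consistent with a firewall blocking the port.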
-Ravi
On 29/05/19 2:26 PM, David Cunningham wrote:
Hi Ravi and Joe,
The command "gluster volume status gvol0" shows all 3 nodes as being online, even on gfs3 as below. I've attached the glfsheal-gvol0.log, in which I can't see anything like a connection error. Would you have any further suggestions? Thank you.
[root@gfs3 glusterfs]# gluster volume status gvol0
Status of volume: gvol0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs1:/nodirectwritedata/gluster/gvol0 49152     0          Y       7706
Brick gfs2:/nodirectwritedata/gluster/gvol0 49152     0          Y       7625
Brick gfs3:/nodirectwritedata/gluster/gvol0 49152     0          Y       7307
Self-heal Daemon on localhost               N/A       N/A        Y       7316
Self-heal Daemon on gfs1                    N/A       N/A        Y       40591
Self-heal Daemon on gfs2                    N/A       N/A        Y       7634

Task Status of Volume gvol0
------------------------------------------------------------------------------
There are no active volume tasks
On Wed, 29 May 2019 at 16:26, Ravishankar N <ravishankar@redhat.com> wrote:
On 29/05/19 6:21 AM, David Cunningham wrote:
_______________________________________________ Gluster-users mailing list Gluster-users@xxxxxxxxxxx https://lists.gluster.org/mailman/listinfo/gluster-users