Re: Cannot connect to the cluster after customizing the port

Hi:
     The firewall has been confirmed to be disabled.
     The following is the startup log, and the listen port on both nodes is 24017:
     Final graph:
     +------------------------------------------------------------------------------+
       1: volume management
       2:     type mgmt/glusterd
       3:     option rpc-auth.auth-glusterfs on
       4:     option rpc-auth.auth-unix on
       5:     option rpc-auth.auth-null on
       6:     option rpc-auth-allow-insecure on
       7:     option transport.listen-backlog 1024
       8:     option max-port 60999
       9:     option base-port 49252
      10:     option event-threads 1
      11:     option ping-timeout 0
      12:     option transport.rdma.listen-port 24008
      13:     option transport.socket.listen-port 24017
      14:     option transport.socket.read-fail-log off
      15:     option transport.socket.keepalive-interval 2
      16:     option transport.socket.keepalive-time 10
      17:     option transport-type rdma
      18:     option working-directory /var/lib/glusterd
      19: end-volume
      20:
     +------------------------------------------------------------------------------+
     [2020-12-08 05:44:37.300602] I [MSGID: 101190] [event-epoll.c:682:event_dispatch_epoll_worker] 0-epoll: Started thread with index 0

    And the following is the log written after executing the command “gluster peer probe dev2”. It shows that the probe connects to port 24007, not 24017. Is this port hard-coded?

 [2020-12-08 05:45:23.136309] I [MSGID: 106487] [glusterd-handler.c:1082:__glusterd_handle_cli_probe] 0-glusterd: Received CLI probe req dev2 24007 
[2020-12-08 05:45:23.138990] I [MSGID: 106128] [glusterd-handler.c:3541:glusterd_probe_begin] 0-glusterd: Unable to find peerinfo for host: dev2 (24007) 
[2020-12-08 05:45:23.208257] W [MSGID: 106061] [glusterd-handler.c:3315:glusterd_transport_inet_options_build] 0-glusterd: Failed to get tcp-user-timeout 
[2020-12-08 05:45:23.208387] I [rpc-clnt.c:1014:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2020-12-08 05:45:23.255375] I [MSGID: 106498] [glusterd-handler.c:3470:glusterd_friend_add] 0-management: connect returned 0 
[2020-12-08 05:45:23.260180] I [MSGID: 106004] [glusterd-handler.c:6204:__glusterd_peer_rpc_notify] 0-management: Peer <dev2> (<00000000-0000-0000-0000-000000000000>), in state <Establishing Connection>, has disconnected from glusterd. 
[2020-12-08 05:45:23.260997] I [MSGID: 106599] [glusterd-nfs-svc.c:161:glusterd_nfssvc_reconfigure] 0-management: nfs/server.so xlator is not installed 
[2020-12-08 05:45:23.261412] I [MSGID: 106544] [glusterd.c:152:glusterd_uuid_init] 0-management: retrieved UUID: 769fa9d7-a204-4c44-a15e-b5eac367e322 
[2020-12-08 05:45:23.261480] I [rpc-clnt.c:1014:rpc_clnt_connection_init] 0-quotad: setting frame-timeout to 600
[2020-12-08 05:45:23.261737] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: quotad already stopped 
[2020-12-08 05:45:23.261780] I [MSGID: 106568] [glusterd-svc-mgmt.c:265:glusterd_svc_stop] 0-management: quotad service is stopped 
[2020-12-08 05:45:23.261915] I [rpc-clnt.c:1014:rpc_clnt_connection_init] 0-bitd: setting frame-timeout to 600
[2020-12-08 05:45:23.262039] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: bitd already stopped 
[2020-12-08 05:45:23.262064] I [MSGID: 106568] [glusterd-svc-mgmt.c:265:glusterd_svc_stop] 0-management: bitd service is stopped 
[2020-12-08 05:45:23.262110] I [rpc-clnt.c:1014:rpc_clnt_connection_init] 0-scrub: setting frame-timeout to 600
[2020-12-08 05:45:23.262289] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: scrub already stopped 
[2020-12-08 05:45:23.262331] I [MSGID: 106568] [glusterd-svc-mgmt.c:265:glusterd_svc_stop] 0-management: scrub service is stopped
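
For reference, one way to confirm which port glusterd is actually bound to on each node (a minimal sketch; ss ships with iproute on CentOS 7):

    # ss -tlnp | grep glusterd                      # list glusterd's TCP listeners; expect *:24017
    # grep listen-port /etc/glusterfs/glusterd.vol  # the configured listen port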

On 2020-12-08 05:01:12, "Strahil Nikolov" <hunter86_bg@xxxxxxxxx> wrote:

Are you sure that the firewall is open? Usually, there can be a firewall in between the nodes.
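
For example (a sketch assuming firewalld, the CentOS 7 default; adapt if you manage iptables directly), you can check and open the custom port on both nodes:

# firewall-cmd --state                            # is firewalld running at all?
# firewall-cmd --list-ports                       # which ports are currently open?
# firewall-cmd --add-port=24017/tcp --permanent   # open the custom glusterd port
# firewall-cmd --reload                           # apply the permanent rule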

Also, you can run tcpdump on nodeA, issue a peer probe from nodeB, and see if there is anything hitting it.
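
Something like this on nodeA (a minimal sketch; the "any" pseudo-interface saves guessing the NIC name), while you issue the probe from nodeB:

# tcpdump -i any -nn 'tcp port 24017 or tcp port 24007'   # watch both the custom and the default port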

What about enabling trace logs and checking for any clues? You can find details about the log levels at: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/administration_guide/configuring_the_log_level
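
On CentOS the glusterd systemd unit usually reads LOG_LEVEL from /etc/sysconfig/glusterd (a sketch assuming that layout; glusterd also accepts --log-level on its command line):

# sed -i "s/^LOG_LEVEL=.*/LOG_LEVEL='TRACE'/" /etc/sysconfig/glusterd   # raise glusterd verbosity
# systemctl restart glusterd
# tail -f /var/log/glusterfs/glusterd.log                               # look for connect attempts and target ports
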
Best Regards,
Strahil Nikolov

At 10:07 +0800 on 07.12.2020 (Mon), sky wrote:
Hi:
1. The ports on both nodes were changed to 24017.
2. After installation, I changed the port and then started glusterd for the first time (it was not a restart).
3. I tried that: after first probing and then changing the port, the nodes in the pool cannot connect after restarting. The log shows an attempt to connect to the default port.

Steps (on both node1 and node2):
   # yum install glusterfs glusterfs-server glusterfs-fuse
   # vim /etc/glusterfs/glusterd.vol
   ....
   option transport.socket.listen-port 24017
   ....
   # systemctl start glusterd

   # gluster peer probe node2
   Probe returned with Transport endpoint is not connected
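
One way to confirm that the custom port itself is reachable from node1 (a minimal sketch, assuming nmap-ncat is installed):

   # nc -zv node2 24017   # -z: connect-scan only, -v: verbose; succeeds if glusterd is listening there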

After changing the port back to 24007 and restarting, the connection is normal.

On 2020-12-06 19:27:01, "Strahil Nikolov" <hunter86_bg@xxxxxxxxx> wrote:
>Did you change the port on both nodes?
>Did you restart glusterd on both nodes (the one you do the peer probe from and the one that is being probed)?
>
>Have you tried to first peer probe and then change the port on all nodes in the pool?
>
>
>Best Regards,
>Strahil Nikolov
>
>
>
>
>
>On Friday, 4 December 2020 at 12:18:42 GMT+2, sky <x_hsky@xxxxxxx> wrote:
>
>
>
>
>
>linux version: CentOS 7.5
>gluster version: 7.5.1
>/etc/glusterfs/glusterd.vol: 
>volume management
>    type mgmt/glusterd
>    option working-directory /var/lib/glusterd
>    option transport-type socket,rdma
>    option transport.socket.keepalive-time 10
>    option transport.socket.keepalive-interval 2
>    option transport.socket.read-fail-log off
>    option transport.socket.listen-port 24017
>    option transport.rdma.listen-port 24008
>    option ping-timeout 0
>    option event-threads 1
>#   option lock-timer 180
>#   option transport.address-family inet6
>    option base-port 49252
>    option max-port  60999
>end-volume
>
>glusterd started normally after changing the port on both nodes (from 24007 to 24017), but I cannot add nodes through the 'gluster peer probe node2' command.
>It always prompts me: 
>   Probe returned with Transport endpoint is not connected
>

________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
