Re: never ending logging

Hi.

No operation on any volume or brick; the only change was the SSL certificate renewal on the 3 nodes and all clients. Then node 2 was rejected, and I applied the following steps to fix it: https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Administrator%20Guide/Resolving%20Peer%20Rejected/
I also saw https://docs.gluster.org/en/latest/Troubleshooting/troubleshooting-glusterd/ but that solution wasn't applicable, as cluster.max-op-version doesn't exist here and the op-version is the same on all 3 nodes.
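For reference, this is roughly how I compared the op-versions (a minimal sketch; run on each node):

```shell
# Operating version persisted by glusterd on this node
grep operating-version /var/lib/glusterd/glusterd.info

# Cluster-wide op-version as reported by the CLI
gluster volume get all cluster.op-version

# On newer releases this also works; on this version the option doesn't exist:
# gluster volume get all cluster.max-op-version
```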

The strange thing is that the error "failed to fetch volume file" occurs on the node owning the brick. Does that mean it can't access its own brick?

Regards,
Nicolas.


From: "Nikhil Ladha" <nladha@xxxxxxxxxx>
To: nico@xxxxxxxxxx
Cc: "gluster-users" <gluster-users@xxxxxxxxxxx>
Sent: Tuesday, 28 April 2020 07:43:20
Subject: Re: never ending logging

Hi,
Since everything is working fine except for a few bricks that are not coming up, I doubt there is any issue with Gluster itself. Did you by chance make any changes to those bricks, or to the volume or the node to which they are linked?
And as far as SSL logs are concerned, I am looking into that matter.

Regards
Nikhil Ladha


On Mon, Apr 27, 2020 at 7:17 PM <nico@xxxxxxxxxx> wrote:
Thanks for reply.

I updated the storage pool to 7.5 and restarted all 3 nodes sequentially.
All nodes now appear in the Connected state from every node, and gluster volume list shows all 74 volumes.
The SSL log lines are still flooding the glusterd log file on all nodes, but they don't appear in the brick log files. As there's no information about the volume or client on these lines, I'm not able to check whether a particular volume produces this error or not.
I also tried pstack after installing the Debian package glusterfs-dbg, but I'm still getting a "No symbols" error.
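In case it helps, the backtraces can also be captured with gdb even when pstack prints "No symbols" (a sketch; assumes gdb and a glusterfs-dbg package matching the installed version):

```shell
# Attach to the running glusterd, dump all thread backtraces,
# then detach without disturbing the process
gdb -p "$(pidof glusterd)" -batch -ex 'thread apply all bt'
```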

I found that 5 brick processes didn't start on node 2, and 1 on node 3:
[2020-04-27 11:54:23.622659] I [MSGID: 100030] [glusterfsd.c:2867:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 7.5 (args: /usr/sbin/glusterfsd -s glusterDevVM2 --volfile-id svg_pg_wed_dev_bkp.glusterDevVM2.bricks-svg_pg_wed_dev_bkp-brick1-data -p /var/run/gluster/vols/svg_pg_wed_dev_bkp/glusterDevVM2-bricks-svg_pg_wed_dev_bkp-brick1-data.pid -S /var/run/gluster/5023d38a22a8a874.socket --brick-name /bricks/svg_pg_wed_dev_bkp/brick1/data -l /var/log/glusterfs/bricks/bricks-svg_pg_wed_dev_bkp-brick1-data.log --xlator-option *-posix.glusterd-uuid=7f6c3023-144b-4db2-9063-d90926dbdd18 --process-name brick --brick-port 49206 --xlator-option svg_pg_wed_dev_bkp-server.listen-port=49206)
[2020-04-27 11:54:23.632870] I [glusterfsd.c:2594:daemonize] 0-glusterfs: Pid of current running process is 5331
[2020-04-27 11:54:23.636679] I [socket.c:4350:ssl_setup_connection_params] 0-socket.glusterfsd: SSL support for glusterd is ENABLED
[2020-04-27 11:54:23.636745] I [socket.c:4360:ssl_setup_connection_params] 0-socket.glusterfsd: using certificate depth 1
[2020-04-27 11:54:23.637580] I [socket.c:958:__socket_server_bind] 0-socket.glusterfsd: closing (AF_UNIX) reuse check socket 9
[2020-04-27 11:54:23.637932] I [socket.c:4347:ssl_setup_connection_params] 0-glusterfs: SSL support on the I/O path is ENABLED
[2020-04-27 11:54:23.637949] I [socket.c:4350:ssl_setup_connection_params] 0-glusterfs: SSL support for glusterd is ENABLED
[2020-04-27 11:54:23.637960] I [socket.c:4360:ssl_setup_connection_params] 0-glusterfs: using certificate depth 1
[2020-04-27 11:54:23.639324] I [MSGID: 101190] [event-epoll.c:682:event_dispatch_epoll_worker] 0-epoll: Started thread with index 0
[2020-04-27 11:54:23.639380] I [MSGID: 101190] [event-epoll.c:682:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2020-04-27 11:54:28.933102] E [glusterfsd-mgmt.c:2217:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
[2020-04-27 11:54:28.933134] E [glusterfsd-mgmt.c:2416:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:svg_pg_wed_dev_bkp.glusterDevVM2.bricks-svg_pg_wed_dev_bkp-brick1-data)
[2020-04-27 11:54:28.933361] W [glusterfsd.c:1596:cleanup_and_exit] (-->/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xe5d1) [0x7f2b08ec35d1] -->/usr/sbin/glusterfsd(mgmt_getspec_cbk+0x8d0) [0x55d46cb5a110] -->/usr/sbin/glusterfsd(cleanup_and_exit+0x54) [0x55d46cb51ec4] ) 0-: received signum (0), shutting down
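To see which bricks are offline and try respawning only the missing brick processes, something like this should work (a sketch, using svg_pg_wed_dev_bkp, one of the affected volumes):

```shell
# Show brick status for the affected volume;
# offline bricks show "N" in the Online column
gluster volume status svg_pg_wed_dev_bkp

# "start force" respawns missing brick processes
# without touching the ones already running
gluster volume start svg_pg_wed_dev_bkp force
```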

I tried to stop the volume, but gluster commands are still locked ("Another transaction is in progress.").
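If the lock turns out to be stale, restarting the management daemon on the node holding it usually releases it (a sketch; glusterd is only the management plane, so brick processes and client I/O keep running):

```shell
# Releases any cluster-wide transaction lock held by this node's glusterd
systemctl restart glusterd

# The CLI should respond again once the lock is gone
gluster volume status
```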

Best regards,
Nicolas.


From: "Nikhil Ladha" <nladha@xxxxxxxxxx>
To: nico@xxxxxxxxxx
Cc: "gluster-users" <gluster-users@xxxxxxxxxxx>
Sent: Monday, 27 April 2020 13:34:47
Subject: Re: never ending logging

Hi,
As you mentioned that node 2 is in a "semi-connected" state, I think that is why the locking of the volume is failing; and since it fails on one of the volumes, the transaction does not complete and you see a transaction error on another volume.
Moreover, regarding the repeated logging of the lines:
SSL support on the I/O path is enabled, SSL support for glusterd is enabled and using certificate depth 1
could you try creating a volume without SSL enabled and then check whether the same log messages appear?
Also, if you update to 7.5 and notice any change in the log messages with SSL enabled, please do share that.

Regards
Nikhil Ladha

________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
