Re: Strange Logs

Yes,

you are right: oVirt might be the culprit.
The logs looked like errors, or at least "unable to do things" messages, to me.

Seems this is all okay after all.

Thanks for replying and clearing that up.
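
As an aside for anyone who lands on this thread and just wants to quiet the
noise: a minimal sketch, assuming GlusterFS 7.x and the volume name
ssd_storage from the logs below. diagnostics.client-log-level is a standard
volume option; whether it also covers the short-lived gfapi helper processes
is an assumption worth verifying on your release.

  # Raise the client-side log level so routine INFO messages
  # (connects, graph switches) are no longer written:
  gluster volume set ssd_storage diagnostics.client-log-level WARNING

  # Revert to the default if WARNING hides too much:
  gluster volume reset ssd_storage diagnostics.client-log-level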

-Chris.

On 14/02/2020 22:17, Artem Russakovskii wrote:
I've been seeing the same thing happen, and in our case it's caused by a script that checks Gluster health from time to time (https://github.com/jtopjian/scripts/blob/master/gluster/gluster-status.sh in our case).

Do you have a job that runs periodically and checks Gluster health, like the sketch below?
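
For anyone matching symptoms later: each run of such a check spawns a
short-lived gfapi client (the graph below shows process-name gfapi.glfsheal,
i.e. the helper behind 'gluster volume heal ... info'), and every connect and
graph switch it performs is logged at INFO level. Here is a minimal sketch of
the kind of cron job that produces exactly this pattern; the script name and
the exact checks are made up for illustration:

  #!/bin/bash
  # /usr/local/bin/gluster-health.sh -- hypothetical periodic health probe.
  # Each invocation starts a short-lived glfsheal (gfapi) client, which
  # writes an INFO-level connect/graph-switch burst like the one below.

  VOLUME="ssd_storage"   # volume name taken from the log excerpt

  # Brick/daemon status; this talks to glusterd only:
  gluster volume status "$VOLUME" >/dev/null || echo "WARN: volume status failed"

  # Heal info spawns the glfsheal gfapi client:
  PENDING=$(gluster volume heal "$VOLUME" info \
            | awk '/Number of entries:/ {s += $NF} END {print s + 0}')
  [ "$PENDING" -gt 0 ] && echo "WARN: $PENDING entries pending heal"

Run from cron every minute or so, this produces one log burst per run.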

Sincerely,
Artem

--
Founder, Android Police <http://www.androidpolice.com>, APK Mirror <http://www.apkmirror.com/>, Illogical Robot LLC
beerpla.net <http://beerpla.net/> | @ArtemR <http://twitter.com/ArtemR>


On Fri, Feb 14, 2020 at 3:10 AM Christian Reiss <email@xxxxxxxxxxxxxxxxxx> wrote:

    Hey folks,

    my logs are constantly swamped, every few seconds, with entries like these:

    [2020-02-14 11:05:20.258542] I [MSGID: 114046]
    [client-handshake.c:1106:client_setvolume_cbk] 0-ssd_storage-client-0:
    Connected to ssd_storage-client-0, attached to remote volume
    '/gluster_bricks/node01.company.com/gluster'.
    [2020-02-14 11:05:20.258559] I [MSGID: 108005]
    [afr-common.c:5280:__afr_handle_child_up_event]
    0-ssd_storage-replicate-0: Subvolume 'ssd_storage-client-0' came back
    up; going online.
    [2020-02-14 11:05:20.258920] I [rpc-clnt.c:1963:rpc_clnt_reconfig]
    0-ssd_storage-client-2: changing port to 49152 (from 0)
    [2020-02-14 11:05:20.259132] I [socket.c:864:__socket_shutdown]
    0-ssd_storage-client-2: intentional socket shutdown(11)
    [2020-02-14 11:05:20.260010] I [MSGID: 114057]
    [client-handshake.c:1376:select_server_supported_programs]
    0-ssd_storage-client-1: Using Program GlusterFS 4.x v1, Num (1298437),
    Version (400)
    [2020-02-14 11:05:20.261077] I [MSGID: 114046]
    [client-handshake.c:1106:client_setvolume_cbk] 0-ssd_storage-client-1:
    Connected to ssd_storage-client-1, attached to remote volume
    '/gluster_bricks/node02.company.com/gluster'.
    [2020-02-14 11:05:20.261089] I [MSGID: 108002]
    [afr-common.c:5647:afr_notify] 0-ssd_storage-replicate-0: Client-quorum
    is met
    [2020-02-14 11:05:20.262005] I [MSGID: 114057]
    [client-handshake.c:1376:select_server_supported_programs]
    0-ssd_storage-client-2: Using Program GlusterFS 4.x v1, Num (1298437),
    Version (400)
    [2020-02-14 11:05:20.262685] I [MSGID: 114046]
    [client-handshake.c:1106:client_setvolume_cbk] 0-ssd_storage-client-2:
    Connected to ssd_storage-client-2, attached to remote volume
    '/gluster_bricks/node03.company.com/gluster'.
    [2020-02-14 11:05:20.263909] I [MSGID: 108031]
    [afr-common.c:2580:afr_local_discovery_cbk] 0-ssd_storage-replicate-0:
    selecting local read_child ssd_storage-client-0
    [2020-02-14 11:05:20.264124] I [MSGID: 104041]
    [glfs-resolve.c:954:__glfs_active_subvol] 0-ssd_storage: switched to
    graph 6e6f6465-3031-2e64-632d-6475732e6461 (0)
    [2020-02-14 11:05:22.407851] I [MSGID: 114007]
    [client.c:2478:client_check_remote_host] 0-ssd_storage-snapd-client:
    Remote host is not set. Assuming the volfile server as remote host
    [Invalid argument]
    [2020-02-14 11:05:22.409711] I [MSGID: 104045]
    [glfs-master.c:80:notify]
    0-gfapi: New graph 6e6f6465-3031-2e64-632d-6475732e6461 (0) coming up
    [2020-02-14 11:05:22.409738] I [MSGID: 114020] [client.c:2436:notify]
    0-ssd_storage-client-0: parent translators are ready, attempting
    connect on transport
    [2020-02-14 11:05:22.412949] I [MSGID: 114020] [client.c:2436:notify]
    0-ssd_storage-client-1: parent translators are ready, attempting
    connect on transport
    [2020-02-14 11:05:22.413130] I [rpc-clnt.c:1963:rpc_clnt_reconfig]
    0-ssd_storage-client-0: changing port to 49152 (from 0)
    [2020-02-14 11:05:22.413154] I [socket.c:864:__socket_shutdown]
    0-ssd_storage-client-0: intentional socket shutdown(10)
    [2020-02-14 11:05:22.415534] I [MSGID: 114020] [client.c:2436:notify]
    0-ssd_storage-client-2: parent translators are ready, attempting
    connect on transport
    [2020-02-14 11:05:22.417836] I [MSGID: 114057]
    [client-handshake.c:1376:select_server_supported_programs]
    0-ssd_storage-client-0: Using Program GlusterFS 4.x v1, Num (1298437),
    Version (400)
    [2020-02-14 11:05:22.418036] I [rpc-clnt.c:1963:rpc_clnt_reconfig]
    0-ssd_storage-client-1: changing port to 49152 (from 0)
    [2020-02-14 11:05:22.418095] I [socket.c:864:__socket_shutdown]
    0-ssd_storage-client-1: intentional socket shutdown(12)
    [2020-02-14 11:05:22.420029] I [MSGID: 114020] [client.c:2436:notify]
    0-ssd_storage-snapd-client: parent translators are ready, attempting
    connect on transport
    [2020-02-14 11:05:22.420533] E [MSGID: 101075]
    [common-utils.c:505:gf_resolve_ip6] 0-resolver: getaddrinfo failed
    (family:2) (Name or service not known)
    [2020-02-14 11:05:22.420545] E
    [name.c:266:af_inet_client_get_remote_sockaddr]
    0-ssd_storage-snapd-client: DNS resolution failed on host
    /var/run/glusterd.socket
    Final graph:
    +------------------------------------------------------------------------------+
        1: volume ssd_storage-client-0
        2:     type protocol/client
        3:     option opversion 70000
        4:     option clnt-lk-version 1
        5:     option volfile-checksum 0
        6:     option volfile-key ssd_storage
        7:     option client-version 7.0
        8:     option process-name gfapi.glfsheal
        9:     option process-uuid CTX_ID:50cec79e-6028-4e6f-b8ed-dda9db36b2d0-GRAPH_ID:0-PID:24926-HOST:node01.company.com-PC_NAME:ssd_storage-client-0-RECON_NO:-0
       10:     option fops-version 1298437
       11:     option ping-timeout 42
       12:     option remote-host node01.company.com
       13:     option remote-subvolume /gluster_bricks/node01.company.com/gluster
       14:     option transport-type socket
       15:     option transport.address-family inet
       16:     option username 96bcf4d4-932f-4654-86c3-470a081d5021
       17:     option password 069e7ee9-b17d-4228-a612-b0f33588a9ec
       18:     option transport.socket.ssl-enabled off
       19:     option transport.tcp-user-timeout 0
       20:     option transport.socket.keepalive-time 20
       21:     option transport.socket.keepalive-interval 2
       22:     option transport.socket.keepalive-count 9
       23:     option send-gids true
       24: end-volume
       25:
       26: volume ssd_storage-client-1
       27:     type protocol/client
       28:     option ping-timeout 42
       29:     option remote-host node02.company.com
       30:     option remote-subvolume /gluster_bricks/node02.company.com/gluster
       31:     option transport-type socket
       32:     option transport.address-family inet
       33:     option username 96bcf4d4-932f-4654-86c3-470a081d5021
       34:     option password 069e7ee9-b17d-4228-a612-b0f33588a9ec
       35:     option transport.socket.ssl-enabled off
       36:     option transport.tcp-user-timeout 0
       37:     option transport.socket.keepalive-time 20
       38:     option transport.socket.keepalive-interval 2
       39:     option transport.socket.keepalive-count 9
       40:     option send-gids true
       41: end-volume
       42:
       43: volume ssd_storage-client-2
       44:     type protocol/client
       45:     option ping-timeout 42
       46:     option remote-host node03.company.com
       47:     option remote-subvolume /gluster_bricks/node03.company.com/gluster
       48:     option transport-type socket
       49:     option transport.address-family inet
       50:     option username 96bcf4d4-932f-4654-86c3-470a081d5021
       51:     option password 069e7ee9-b17d-4228-a612-b0f33588a9ec
       52:     option transport.socket.ssl-enabled off
       53:     option transport.tcp-user-timeout 0
       54:     option transport.socket.keepalive-time 20
       55:     option transport.socket.keepalive-interval 2
       56:     option transport.socket.keepalive-count 9
       57:     option send-gids true
       58: end-volume
       59:
       60: volume ssd_storage-replicate-0
       61:     type cluster/replicate
       62:     option background-self-heal-count 0
       63:     option afr-pending-xattr ssd_storage-client-0,ssd_storage-client-1,ssd_storage-client-2
       64:     option metadata-self-heal on
       65:     option data-self-heal on
       66:     option entry-self-heal on
       67:     option data-self-heal-algorithm full
       68:     option use-compound-fops off
       69:     subvolumes ssd_storage-client-0 ssd_storage-client-1 ssd_storage-client-2
       70: end-volume
       71:
       72: volume ssd_storage-dht
       73:     type cluster/distribute
       74:     option readdir-optimize on
       75:     option lock-migration off
       76:     option force-migration off
       77:     subvolumes ssd_storage-replicate-0
       78: end-volume
       79:
       80: volume ssd_storage-utime
       81:     type features/utime
       82:     option noatime on
       83:     subvolumes ssd_storage-dht
       84: end-volume
       85:
       86: volume ssd_storage-write-behind
       87:     type performance/write-behind
       88:     subvolumes ssd_storage-utime
       89: end-volume
       90:
       91: volume ssd_storage-read-ahead
       92:     type performance/read-ahead
       93:     subvolumes ssd_storage-write-behind
       94: end-volume
       95:
       96: volume ssd_storage-readdir-ahead
       97:     type performance/readdir-ahead
       98:     option parallel-readdir off
       99:     option rda-request-size 131072
    100:     option rda-cache-limit 10MB
    101:     subvolumes ssd_storage-read-ahead
    102: end-volume
    103:
    104: volume ssd_storage-io-cache
    105:     type performance/io-cache
    106:     subvolumes ssd_storage-readdir-ahead
    107: end-volume
    108:
    109: volume ssd_storage-open-behind
    110:     type performance/open-behind
    111:     subvolumes ssd_storage-io-cache
    112: end-volume
    113:
    114: volume ssd_storage-quick-read
    115:     type performance/quick-read
    116:     subvolumes ssd_storage-open-behind
    117: end-volume
    118:
    119: volume ssd_storage-md-cache
    120:     type performance/md-cache
    121:     subvolumes ssd_storage-quick-read
    122: end-volume
    123:
    124: volume ssd_storage-snapd-client
    125:     type protocol/client
    126:     option remote-host /var/run/glusterd.socket
    127:     option ping-timeout 42
    128:     option remote-subvolume snapd-ssd_storage
    129:     option transport-type socket
    130:     option transport.address-family inet
    131:     option username 96bcf4d4-932f-4654-86c3-470a081d5021
    132:     option password 069e7ee9-b17d-4228-a612-b0f33588a9ec
    133:     option transport.socket.ssl-enabled off
    134:     option transport.tcp-user-timeout 0
    135:     option transport.socket.keepalive-time 20
    136:     option transport.socket.keepalive-interval 2
    137:     option transport.socket.keepalive-count 9
    138:     option send-gids true
    139: end-volume
    140:
    141: volume ssd_storage-snapview-client
    142:     type features/snapview-client
    143:     option snapshot-directory .snaps
    144:     option show-snapshot-directory on
    145:     subvolumes ssd_storage-md-cache ssd_storage-snapd-client
    146: end-volume
    147:
    148: volume ssd_storage
    149:     type debug/io-stats
    150:     option log-level INFO
    151:     option threads 16
    152:     option latency-measurement off
    153:     option count-fop-hits off
    154:     option global-threading off
    155:     subvolumes ssd_storage-snapview-client
    156: end-volume
    157:
    158: volume meta-autoload
    159:     type meta
    160:     subvolumes ssd_storage
    161: end-volume
    162:
    +------------------------------------------------------------------------------+
    [2020-02-14 11:05:22.421366] I [MSGID: 114046]
    [client-handshake.c:1106:client_setvolume_cbk] 0-ssd_storage-client-0:
    Connected to ssd_storage-client-0, attached to remote volume
    '/gluster_bricks/node01.company.com/gluster'.
    [2020-02-14 11:05:22.421379] I [MSGID: 108005]
    [afr-common.c:5280:__afr_handle_child_up_event]
    0-ssd_storage-replicate-0: Subvolume 'ssd_storage-client-0' came back
    up; going online.
    [2020-02-14 11:05:22.421669] I [rpc-clnt.c:1963:rpc_clnt_reconfig]
    0-ssd_storage-client-2: changing port to 49152 (from 0)
    [2020-02-14 11:05:22.421686] I [socket.c:864:__socket_shutdown]
    0-ssd_storage-client-2: intentional socket shutdown(11)
    [2020-02-14 11:05:22.422460] I [MSGID: 114057]
    [client-handshake.c:1376:select_server_supported_programs]
    0-ssd_storage-client-1: Using Program GlusterFS 4.x v1, Num (1298437),
    Version (400)
    [2020-02-14 11:05:22.423377] I [MSGID: 114046]
    [client-handshake.c:1106:client_setvolume_cbk] 0-ssd_storage-client-1:
    Connected to ssd_storage-client-1, attached to remote volume
    '/gluster_bricks/node02.company.com/gluster'.
    [2020-02-14 11:05:22.423391] I [MSGID: 108002]
    [afr-common.c:5647:afr_notify] 0-ssd_storage-replicate-0: Client-quorum
    is met
    [2020-02-14 11:05:22.424586] I [MSGID: 114057]
    [client-handshake.c:1376:select_server_supported_programs]
    0-ssd_storage-client-2: Using Program GlusterFS 4.x v1, Num (1298437),
    Version (400)
    [2020-02-14 11:05:22.425323] I [MSGID: 114046]
    [client-handshake.c:1106:client_setvolume_cbk] 0-ssd_storage-client-2:
    Connected to ssd_storage-client-2, attached to remote volume
    '/gluster_bricks/node03.company.com/gluster'.
    [2020-02-14 11:05:22.426613] I [MSGID: 108031]
    [afr-common.c:2580:afr_local_discovery_cbk] 0-ssd_storage-replicate-0:
    selecting local read_child ssd_storage-client-0
    [2020-02-14 11:05:22.426758] I [MSGID: 104041]
    [glfs-resolve.c:954:__glfs_active_subvol] 0-ssd_storage: switched to
    graph 6e6f6465-3031-2e64-632d-6475732e6461 (0)


    Can you guys make any sense of this? 5 unsynced entries remain.
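
    For reference, those unsynced entries can be inspected with the standard
    heal CLI; a minimal sketch, with the volume name ssd_storage taken from
    the graph above ('info summary' needs a reasonably recent release):

    # List the entries each brick still considers pending heal:
    gluster volume heal ssd_storage info

    # Per-brick counts only (newer releases):
    gluster volume heal ssd_storage info summary

    # Ask the self-heal daemon to process the pending entries:
    gluster volume heal ssd_storage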

    --
    with kind regards,
    mit freundlichen Gruessen,

    Christian Reiss



--
with kind regards,
mit freundlichen Gruessen,

Christian Reiss

________

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users



