Brick offline after upgrade

Hello,

We have a single-node, single-brick GlusterFS test system which unfortunately had GlusterFS upgraded from version 5 to 6 while the GlusterFS processes were still running. I know this is not what the "Generic Upgrade procedure" recommends.

Following a restart, the brick is not online, and we can't see any error message explaining exactly why. Would anyone have an idea of where to look?

Since the logs from the time of the upgrade and reboot are a bit lengthy, I've attached them in a text file.
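For reference, this is my understanding of the offline sequence the upgrade guide recommends, which we unfortunately skipped (a sketch only; it assumes systemd, and the package-manager step varies by distro):

```shell
# Sketch of the offline upgrade sequence as I understand the guide
# (assumes systemd; package commands vary by distro). Guarded so it
# is a no-op on a machine without glusterd installed.
if command -v glusterd >/dev/null 2>&1; then
    systemctl stop glusterd                            # stop the management daemon
    killall glusterfsd glusterfs 2>/dev/null || true   # stop brick and client processes
    # ...upgrade the glusterfs packages here (apt/yum/dnf)...
    systemctl start glusterd                           # start the upgraded daemon
fi
```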

Thank you in advance for any advice!

--
David Cunningham, Voisonics Limited
http://voisonics.com/
USA: +1 213 221 1092
New Zealand: +64 (0)28 2558 3782

Current status of volume:

# gluster volume info gvol0
 
Volume Name: gvol0
Type: Distribute
Volume ID: 33ed309b-0e63-4f9a-8132-ab1b0fdcbc36
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: caes8:/nodirectwritedata/gluster/gvol0
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on


# gluster volume status
Status of volume: gvol0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick caes8:/nodirectwritedata/gluster/gvol
0                                           N/A       N/A        N       N/A  
 
Task Status of Volume gvol0
------------------------------------------------------------------------------
There are no active volume tasks


# gluster volume heal gvol0 info
gvol0: Not able to fetch volfile from glusterd
Volume heal failed.

# gluster volume heal gvol0
Launching heal operation to perform index self heal on volume gvol0 has been unsuccessful:
Self-heal-daemon is disabled. Heal will not be triggered on volume gvol0
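(As far as I understand, the self-heal daemon being disabled is expected here, since this is a plain Distribute volume with no replicas to heal; the real problem looks like the brick process itself. For completeness, this is roughly how I check whether a brick process is up and listening; a sketch, written so it degrades gracefully where gluster isn't running:)

```shell
# Rough brick-process health check (sketch; 49152 is the port the
# glusterd log below shows being assigned to this brick).
ps -C glusterfsd -o pid,args 2>/dev/null || echo "no glusterfsd process"
if command -v ss >/dev/null 2>&1; then
    ss -tln 2>/dev/null | grep 49152 || echo "nothing listening on 49152"
fi
```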


From /var/log/glusterfs/glusterd.log at the time of the upgrade:

[2021-03-18 23:52:41.972352] W [glusterfsd.c:1500:cleanup_and_exit] (-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f5bb21a66db] -->/usr/sbin/glusterd(glusterfs_sigwaiter+0xfd) [0x55f247b24a1d] -->/usr/sbin/glusterd(cleanup_and_exit+0x54) [0x55f247b24874] ) 0-: received signum (15), shutting down
[2021-03-18 23:52:43.479004] I [MSGID: 100030] [glusterfsd.c:2847:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 6.10 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO)
[2021-03-18 23:52:43.479669] I [glusterfsd.c:2556:daemonize] 0-glusterfs: Pid of current running process is 24762
[2021-03-18 23:52:43.486078] I [MSGID: 106478] [glusterd.c:1422:init] 0-management: Maximum allowed open file descriptors set to 65536
[2021-03-18 23:52:43.486126] I [MSGID: 106479] [glusterd.c:1478:init] 0-management: Using /var/lib/glusterd as working directory
[2021-03-18 23:52:43.486143] I [MSGID: 106479] [glusterd.c:1484:init] 0-management: Using /var/run/gluster as pid file working directory
[2021-03-18 23:52:43.490527] I [socket.c:1022:__socket_server_bind] 0-socket.management: process started listening on port (24007)
[2021-03-18 23:52:43.492110] W [MSGID: 103071] [rdma.c:4472:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed [No such device]
[2021-03-18 23:52:43.492158] W [MSGID: 103055] [rdma.c:4782:init] 0-rdma.management: Failed to initialize IB Device
[2021-03-18 23:52:43.492174] W [rpc-transport.c:363:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed
[2021-03-18 23:52:43.492343] W [rpcsvc.c:1985:rpcsvc_create_listener] 0-rpc-service: cannot create listener, initing the transport failed
[2021-03-18 23:52:43.492353] E [MSGID: 106244] [glusterd.c:1785:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport
[2021-03-18 23:52:43.493660] I [socket.c:965:__socket_server_bind] 0-socket.management: closing (AF_UNIX) reuse check socket 12
[2021-03-18 23:52:43.494024] I [MSGID: 106059] [glusterd.c:1865:init] 0-management: max-port override: 60999
[2021-03-18 23:52:45.450424] I [MSGID: 106513] [glusterd-store.c:2394:glusterd_restore_op_version] 0-glusterd: retrieved op-version: 40000
[2021-03-18 23:52:45.509733] I [MSGID: 106544] [glusterd.c:152:glusterd_uuid_init] 0-management: retrieved UUID: 6435ac29-b5ab-48a6-91e2-48ba4fbf7d89
[2021-03-18 23:52:45.511294] I [MSGID: 106194] [glusterd-store.c:4108:glusterd_store_retrieve_missed_snaps_list] 0-management: No missed snaps list.
[2021-03-18 23:52:45.511326] E [MSGID: 101032] [store.c:447:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/glusterd.upgrade. [No such file or directory]
[2021-03-18 23:52:45.511343] I [glusterd.c:1999:init] 0-management: Regenerating volfiles due to a max op-version mismatch or glusterd.upgrade file not being present, op_version retrieved:0, max op_version: 60000
Final graph:
+------------------------------------------------------------------------------+
  1: volume management
  2:     type mgmt/glusterd
  3:     option rpc-auth.auth-glusterfs on
  4:     option rpc-auth.auth-unix on
  5:     option rpc-auth.auth-null on
  6:     option rpc-auth-allow-insecure on
  7:     option transport.listen-backlog 1024
  8:     option max-port 60999
  9:     option event-threads 1
 10:     option ping-timeout 0
 11:     option transport.rdma.listen-port 24008
 12:     option transport.socket.listen-port 24007
 13:     option transport.socket.read-fail-log off
 14:     option transport.socket.keepalive-interval 2
 15:     option transport.socket.keepalive-time 10
 16:     option transport-type rdma
 17:     option working-directory /var/lib/glusterd
 18: end-volume
 19:  
+------------------------------------------------------------------------------+
[2021-03-18 23:52:45.523970] I [glusterd-utils.c:6227:glusterd_brick_start] 0-management: discovered already-running brick /nodirectwritedata/gluster/gvol0
[2021-03-18 23:52:45.523991] I [MSGID: 101190] [event-epoll.c:688:event_dispatch_epoll_worker] 0-epoll: Started thread with index 0
[2021-03-18 23:52:45.524068] I [MSGID: 106142] [glusterd-pmap.c:290:pmap_registry_bind] 0-pmap: adding brick /nodirectwritedata/gluster/gvol0 on port 49152
[2021-03-18 23:52:45.524146] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2021-03-18 23:52:45.528420] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-nfs: setting frame-timeout to 600
[2021-03-18 23:52:45.528501] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: nfs already stopped
[2021-03-18 23:52:45.528518] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: nfs service is stopped
[2021-03-18 23:52:45.529002] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-glustershd: setting frame-timeout to 600
[2021-03-18 23:52:45.529068] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: glustershd already stopped
[2021-03-18 23:52:45.529083] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: glustershd service is stopped
[2021-03-18 23:52:45.529112] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-quotad: setting frame-timeout to 600
[2021-03-18 23:52:45.529209] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: quotad already stopped
[2021-03-18 23:52:45.529221] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: quotad service is stopped
[2021-03-18 23:52:45.529250] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-bitd: setting frame-timeout to 600
[2021-03-18 23:52:45.529349] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: bitd already stopped
[2021-03-18 23:52:45.529361] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: bitd service is stopped
[2021-03-18 23:52:45.529386] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-scrub: setting frame-timeout to 600
[2021-03-18 23:52:45.529475] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: scrub already stopped
[2021-03-18 23:52:45.529486] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: scrub service is stopped
[2021-03-18 23:52:45.608325] I [MSGID: 106327] [glusterd-geo-rep.c:2686:glusterd_get_statefile_name] 0-management: Using passed config template(/var/lib/glusterd/geo-replication/gvol0_nves6.xxx.com_gvol1/gsyncd.conf).
[2021-03-18 23:52:45.874904] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 600
[2021-03-18 23:52:45.875152] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-gfproxyd: setting frame-timeout to 600
[2021-03-18 23:52:45.876552] I [MSGID: 106495] [glusterd-handler.c:3155:__glusterd_handle_getwd] 0-glusterd: Received getwd req
[2021-03-18 23:52:45.968554] I [MSGID: 106495] [glusterd-handler.c:3155:__glusterd_handle_getwd] 0-glusterd: Received getwd req
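(One thing I notice in the excerpt above is the op-version line: retrieved 0/40000 against a max of 60000. My understanding is that the cluster op-version is not bumped automatically by an upgrade and has to be raised once all nodes are on the new version; a sketch of the commands I believe apply, guarded so it does nothing without the gluster CLI:)

```shell
# Check the current cluster op-version and, once every node runs 6.x,
# raise it (60000 matches the max op_version reported in the log above).
if command -v gluster >/dev/null 2>&1; then
    gluster volume get all cluster.op-version
    # gluster volume set all cluster.op-version 60000   # uncomment when all nodes are upgraded
fi
```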


And then from /var/log/glusterfs/glusterd.log after the reboot:

[2021-03-19 00:06:16.094332] I [MSGID: 100030] [glusterfsd.c:2847:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 6.10 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO)
[2021-03-19 00:06:16.096151] I [glusterfsd.c:2556:daemonize] 0-glusterfs: Pid of current running process is 1368
[2021-03-19 00:06:16.149339] I [MSGID: 106478] [glusterd.c:1422:init] 0-management: Maximum allowed open file descriptors set to 65536
[2021-03-19 00:06:16.149368] I [MSGID: 106479] [glusterd.c:1478:init] 0-management: Using /var/lib/glusterd as working directory
[2021-03-19 00:06:16.149376] I [MSGID: 106479] [glusterd.c:1484:init] 0-management: Using /var/run/gluster as pid file working directory
[2021-03-19 00:06:16.154018] I [socket.c:1022:__socket_server_bind] 0-socket.management: process started listening on port (24007)
[2021-03-19 00:06:16.185620] W [MSGID: 103071] [rdma.c:4472:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed [No such device]
[2021-03-19 00:06:16.185666] W [MSGID: 103055] [rdma.c:4782:init] 0-rdma.management: Failed to initialize IB Device
[2021-03-19 00:06:16.185689] W [rpc-transport.c:363:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed
[2021-03-19 00:06:16.185753] W [rpcsvc.c:1985:rpcsvc_create_listener] 0-rpc-service: cannot create listener, initing the transport failed
[2021-03-19 00:06:16.185763] E [MSGID: 106244] [glusterd.c:1785:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport
[2021-03-19 00:06:16.188255] I [socket.c:965:__socket_server_bind] 0-socket.management: closing (AF_UNIX) reuse check socket 12
[2021-03-19 00:06:16.189220] I [MSGID: 106059] [glusterd.c:1865:init] 0-management: max-port override: 60999
[2021-03-19 00:06:17.545711] I [MSGID: 106513] [glusterd-store.c:2394:glusterd_restore_op_version] 0-glusterd: retrieved op-version: 40000
[2021-03-19 00:06:17.593889] I [MSGID: 106544] [glusterd.c:152:glusterd_uuid_init] 0-management: retrieved UUID: 6435ac29-b5ab-48a6-91e2-48ba4fbf7d89
[2021-03-19 00:06:17.625805] I [MSGID: 106194] [glusterd-store.c:4108:glusterd_store_retrieve_missed_snaps_list] 0-management: No missed snaps list.
Final graph:
+------------------------------------------------------------------------------+
  1: volume management
  2:     type mgmt/glusterd
  3:     option rpc-auth.auth-glusterfs on
  4:     option rpc-auth.auth-unix on
  5:     option rpc-auth.auth-null on
  6:     option rpc-auth-allow-insecure on
  7:     option transport.listen-backlog 1024
  8:     option max-port 60999
  9:     option event-threads 1
 10:     option ping-timeout 0
 11:     option transport.rdma.listen-port 24008
 12:     option transport.socket.listen-port 24007
 13:     option transport.socket.read-fail-log off
 14:     option transport.socket.keepalive-interval 2
 15:     option transport.socket.keepalive-time 10
 16:     option transport-type rdma
 17:     option working-directory /var/lib/glusterd
 18: end-volume
 19:  
+------------------------------------------------------------------------------+
[2021-03-19 00:06:17.626896] I [glusterd-utils.c:6314:glusterd_brick_start] 0-management: starting a fresh brick process for brick /nodirectwritedata/gluster/gvol0
[2021-03-19 00:06:17.627014] I [MSGID: 101190] [event-epoll.c:688:event_dispatch_epoll_worker] 0-epoll: Started thread with index 0
[2021-03-19 00:06:17.628412] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2021-03-19 00:06:17.642002] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-nfs: setting frame-timeout to 600
[2021-03-19 00:06:17.642058] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: nfs already stopped
[2021-03-19 00:06:17.642072] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: nfs service is stopped
[2021-03-19 00:06:17.643147] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-glustershd: setting frame-timeout to 600
[2021-03-19 00:06:17.643206] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: glustershd already stopped
[2021-03-19 00:06:17.643217] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: glustershd service is stopped
[2021-03-19 00:06:17.643238] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-quotad: setting frame-timeout to 600
[2021-03-19 00:06:17.643304] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: quotad already stopped
[2021-03-19 00:06:17.643314] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: quotad service is stopped
[2021-03-19 00:06:17.643334] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-bitd: setting frame-timeout to 600
[2021-03-19 00:06:17.643402] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: bitd already stopped
[2021-03-19 00:06:17.643411] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: bitd service is stopped
[2021-03-19 00:06:17.643431] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-scrub: setting frame-timeout to 600
[2021-03-19 00:06:17.643491] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: scrub already stopped
[2021-03-19 00:06:17.643501] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: scrub service is stopped
[2021-03-19 00:06:17.699249] I [MSGID: 106327] [glusterd-geo-rep.c:2686:glusterd_get_statefile_name] 0-management: Using passed config template(/var/lib/glusterd/geo-replication/gvol0_nves6.xxx.com_gvol1/gsyncd.conf).
[2021-03-19 00:06:17.874255] I [MSGID: 106495] [glusterd-handler.c:3155:__glusterd_handle_getwd] 0-glusterd: Received getwd req


From /var/log/glusterfs/bricks/nodirectwritedata-gluster-gvol0.log at the time of the upgrade:

[2021-03-18 22:06:35.486712] I [addr.c:54:compare_addr_and_update] 0-/nodirectwritedata/gluster/gvol0: allowed = "*", received addr = "xx.xx.xx.108"
[2021-03-18 22:06:35.486758] I [login.c:110:gf_auth] 0-auth/login: allowed user names: 5744050c-9447-4509-887f-f05f40250857
[2021-03-18 22:06:35.486775] I [MSGID: 115029] [server-handshake.c:539:server_setvolume] 0-gvol0-server: accepted client from CTX_ID:1f5ae9c9-d6fa-48a4-863a-4a2099da192e-GRAPH_ID:0-PID:17164-HOST:caes8.xxx.com-PC_NAME:gvol0-client-0-RECON_NO:-0 (version: 5.13)
[2021-03-18 23:52:41.977510] W [socket.c:719:__socket_rwv] 0-glusterfs: readv on xx.xx.xx.108:24007 failed (No data available)
[2021-03-18 23:52:41.977589] I [glusterfsd-mgmt.c:2424:mgmt_rpc_notify] 0-glusterfsd-mgmt: disconnected from remote-host: caes8
[2021-03-18 23:52:41.977604] I [glusterfsd-mgmt.c:2444:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted all volfile servers
[2021-03-18 23:52:52.084483] I [rpcsvc.c:2507:rpcsvc_set_outstanding_rpc_limit] 2-rpc-service: Configured rpc.outstanding-rpc-limit with value 64
[2021-03-18 23:52:52.084671] E [socket.c:901:__socket_server_bind] 2-tcp.gvol0-server: binding to  failed: Address already in use
[2021-03-18 23:52:52.084690] E [socket.c:903:__socket_server_bind] 2-tcp.gvol0-server: Port is already in use
[2021-03-18 23:52:52.084724] W [rpcsvc.c:1795:rpcsvc_create_listener] 2-rpc-service: listening on transport failed
[2021-03-18 23:52:52.084738] W [MSGID: 115045] [server.c:1077:server_init] 2-gvol0-server: creation of listener failed
[2021-03-18 23:52:52.084754] E [MSGID: 101019] [xlator.c:715:xlator_init] 0-gvol0-server: Initialization of volume 'gvol0-server' failed, review your volfile again
[2021-03-18 23:52:52.084768] E [MSGID: 101066] [graph.c:362:glusterfs_graph_init] 0-gvol0-server: initializing translator failed
[2021-03-18 23:52:52.084780] E [MSGID: 101176] [graph.c:725:glusterfs_graph_activate] 0-graph: init failed
[2021-03-18 23:52:52.193408] I [addr.c:54:compare_addr_and_update] 0-/nodirectwritedata/gluster/gvol0: allowed = "*", received addr = "xx.xx.xx.108"
[2021-03-18 23:52:52.193462] I [login.c:110:gf_auth] 0-auth/login: allowed user names: 5744050c-9447-4509-887f-f05f40250857
[2021-03-18 23:52:52.193482] I [MSGID: 115029] [server-handshake.c:539:server_setvolume] 0-gvol0-server: accepted client from CTX_ID:1f5ae9c9-d6fa-48a4-863a-4a2099da192e-GRAPH_ID:2-PID:17164-HOST:caes8.xxx.com-PC_NAME:gvol0-client-0-RECON_NO:-0 (version: 5.13)
[2021-03-18 23:52:52.916136] I [addr.c:54:compare_addr_and_update] 0-/nodirectwritedata/gluster/gvol0: allowed = "*", received addr = "xx.xx.xx.108"
[2021-03-18 23:52:52.916180] I [login.c:110:gf_auth] 0-auth/login: allowed user names: 5744050c-9447-4509-887f-f05f40250857
[2021-03-18 23:52:52.916201] I [MSGID: 115029] [server-handshake.c:539:server_setvolume] 0-gvol0-server: accepted client from CTX_ID:279c3d0b-8873-47cf-8dee-c49530c213de-GRAPH_ID:2-PID:2654-HOST:caes8.xxx.com-PC_NAME:gvol0-client-0-RECON_NO:-0 (version: 5.13)
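(The "Address already in use" / "Port is already in use" errors above look like the key symptom from the live-upgrade window: the old 5.13 brick process was apparently still bound to the brick port when the regenerated graph tried to start a new listener. This is roughly how I'd confirm what is holding the port; a sketch, guarded in case ss isn't available:)

```shell
# Find whatever still holds the brick port (49152 per the logs above).
if command -v ss >/dev/null 2>&1; then
    ss -tlnp 2>/dev/null | grep 49152 || echo "port 49152 is free"
fi
```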


And from /var/log/glusterfs/bricks/nodirectwritedata-gluster-gvol0.log after the reboot:

[2021-03-19 00:06:17.630066] I [MSGID: 100030] [glusterfsd.c:2847:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 6.10 (args: /usr/sbin/glusterfsd -s caes8 --volfile-id gvol0.caes8.nodirectwritedata-gluster-gvol0 -p /var/run/gluster/vols/gvol0/caes8-nodirectwritedata-gluster-gvol0.pid -S /var/run/gluster/23b34390495beb0b.socket --brick-name /nodirectwritedata/gluster/gvol0 -l /var/log/glusterfs/bricks/nodirectwritedata-gluster-gvol0.log --xlator-option *-posix.glusterd-uuid=6435ac29-b5ab-48a6-91e2-48ba4fbf7d89 --process-name brick --brick-port 49152 --xlator-option gvol0-server.listen-port=49152)
[2021-03-19 00:06:17.630619] I [glusterfsd.c:2556:daemonize] 0-glusterfs: Pid of current running process is 1593
[2021-03-19 00:06:17.632915] I [socket.c:965:__socket_server_bind] 0-socket.glusterfsd: closing (AF_UNIX) reuse check socket 9
[2021-03-19 00:06:17.639327] E [socket.c:3626:socket_connect] 0-glusterfs: connection attempt on  failed, (Network is unreachable)
[2021-03-19 00:06:17.639439] I [MSGID: 101190] [event-epoll.c:688:event_dispatch_epoll_worker] 0-epoll: Started thread with index 0
[2021-03-19 00:06:17.639467] I [MSGID: 101190] [event-epoll.c:688:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2021-03-19 00:06:17.639483] I [glusterfsd-mgmt.c:2443:mgmt_rpc_notify] 0-glusterfsd-mgmt: disconnected from remote-host: caes8
[2021-03-19 00:06:17.639491] I [glusterfsd-mgmt.c:2463:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted all volfile servers
[2021-03-19 00:06:17.639607] W [glusterfsd.c:1570:cleanup_and_exit] (-->/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xe613) [0x7f40ef94d613] -->/usr/sbin/glusterfsd(+0x12b4f) [0x55e1acea0b4f] -->/usr/sbin/glusterfsd(cleanup_and_exit+0x54) [0x55e1ace95994] ) 0-: received signum (1), shutting down
[2021-03-19 00:06:17.639789] E [socket.c:3626:socket_connect] 0-glusterfs: connection attempt on  failed, (Network is unreachable)
[2021-03-19 00:06:17.639801] W [rpc-clnt.c:1691:rpc_clnt_submit] 0-glusterfs: error returned while attempting to connect to host:(null), port:0
[2021-03-19 00:06:17.639832] W [glusterfsd.c:1570:cleanup_and_exit] (-->/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xe613) [0x7f40ef94d613] -->/usr/sbin/glusterfsd(+0x12b4f) [0x55e1acea0b4f] -->/usr/sbin/glusterfsd(cleanup_and_exit+0x54) [0x55e1ace95994] ) 0-: received signum (1), shutting down
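(After the reboot the brick process seems to exit immediately: it gets "Network is unreachable" trying to reach the volfile server caes8, reports "Exhausted all volfile servers", and shuts down rather than retrying, presumably because it was started before networking was up. This is how I'd at least confirm that caes8 resolves locally; a hosts-file entry would avoid depending on DNS at boot:)

```shell
# Confirm the volfile server name resolves on this host.
getent hosts caes8 || echo "caes8 does not resolve"
```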


Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
