Re: [Bugs] Bricks are going offline unable to recover with heal/start force commands

Hi Sanju,

Please find the requested information (these are the latest logs :) ).

I can see only the following messages related to brick "brick_e15c12cceae12c8ab7782dd57cf5b6c1" in the second node's glusterd log:

[2019-01-23 11:50:20.322902] I [glusterd-utils.c:5994:glusterd_brick_start] 0-management: discovered already-running brick /var/lib/heketi/mounts/vg_d5f17487744584e3652d3ca943b0b91b/brick_e15c12cceae12c8ab7782dd57cf5b6c1/brick
[2019-01-23 11:50:20.322925] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_d5f17487744584e3652d3ca943b0b91b/brick_e15c12cceae12c8ab7782dd57cf5b6c1/brick on port 49165  >> shows the brick registered on this port, yet volume status reports it as not online
[2019-01-23 11:50:20.327557] I [MSGID: 106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped
[2019-01-23 11:50:20.327586] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: nfs service is stopped
[2019-01-23 11:50:20.327604] I [MSGID: 106599] [glusterd-nfs-svc.c:82:glusterd_nfssvc_manager] 0-management: nfs/server.so xlator is not installed
[2019-01-23 11:50:20.337735] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 69525
[2019-01-23 11:50:21.338058] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: glustershd service is stopped
[2019-01-23 11:50:21.338180] I [MSGID: 106567] [glusterd-svc-mgmt.c:203:glusterd_svc_start] 0-management: Starting glustershd service
[2019-01-23 11:50:21.348234] I [MSGID: 106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped
[2019-01-23 11:50:21.348285] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: bitd service is stopped
[2019-01-23 11:50:21.348866] I [MSGID: 106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped
[2019-01-23 11:50:21.348883] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: scrub service is stopped
[2019-01-23 11:50:22.356502] I [run.c:241:runner_log] (-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2b3a) [0x7fca9e139b3a] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2605) [0x7fca9e139605] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fcaa346f0e5] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh --volname=vol_3442e86b6d994a14de73f1b8c82cf0b8 --first=no --version=1 --volume-op=start --gd-workdir=/var/lib/glusterd
[2019-01-23 11:50:22.368845] E [run.c:241:runner_log] (-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2b3a) [0x7fca9e139b3a] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2563) [0x7fca9e139563] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fcaa346f0e5] ) 0-management: Failed to execute script: /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh --volname=vol_3442e86b6d994a14de73f1b8c82cf0b8 --first=no --version=1 --volume-op=start --gd-workdir=/var/lib/glusterd
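
For what it's worth, the log above shows glusterd reporting "discovered already-running brick" and registering it on port 49165, while the volume status below still lists the brick as offline. A minimal way to cross-check that mismatch on the second node (a sketch, assuming the usual glusterfsd process name and that ss is available inside the storage pod) is:

# Is there a glusterfsd process serving this brick path?
ps -ef | grep glusterfsd | grep brick_e15c12cceae12c8ab7782dd57cf5b6c1

# Is anything actually listening on the port glusterd registered?
ss -tlnp | grep 49165

If neither command returns anything, glusterd may simply be holding stale pidfile/portmap state for this brick.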
 

sh-4.2# gluster volume status vol_3442e86b6d994a14de73f1b8c82cf0b8
Status of volume: vol_3442e86b6d994a14de73f1b8c82cf0b8
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.3.6:/var/lib/heketi/mounts/vg
_ca57f326195c243be2380ce4e42a4191/brick_952
d75fd193c7209c9a81acbc23a3747/brick         49157     0          Y       250
Brick 192.168.3.5:/var/lib/heketi/mounts/vg
_d5f17487744584e3652d3ca943b0b91b/brick_e15
c12cceae12c8ab7782dd57cf5b6c1/brick         N/A       N/A        N       N/A
Brick 192.168.3.15:/var/lib/heketi/mounts/v
g_462ea199185376b03e4b0317363bb88c/brick_17
36459d19e8aaa1dcb5a87f48747d04/brick        49173     0          Y       225
Self-heal Daemon on localhost               N/A       N/A        Y       109550
Self-heal Daemon on 192.168.3.6             N/A       N/A        Y       52557
Self-heal Daemon on 192.168.3.15            N/A       N/A        Y       16946

Task Status of Volume vol_3442e86b6d994a14de73f1b8c82cf0b8
------------------------------------------------------------------------------
There are no active volume tasks


BR
Salam



From:        "Sanju Rakonde" <srakonde@xxxxxxxxxx>
To:        "Shaik Salam" <shaik.salam@xxxxxxx>
Cc:        "Amar Tumballi Suryanarayan" <atumball@xxxxxxxxxx>, "gluster-users@xxxxxxxxxxx List" <gluster-users@xxxxxxxxxxx>, "Murali Kottakota" <murali.kottakota@xxxxxxx>
Date:        01/24/2019 02:32 PM
Subject:        Re: [Bugs] Bricks are going offline unable to recover with heal/start force commands




"External email. Open with Caution"
Shaik,

Sorry to ask this again. What errors are you seeing in glusterd logs? Can you share the latest logs?

On Thu, Jan 24, 2019 at 2:05 PM Shaik Salam <shaik.salam@xxxxxxx> wrote:
Hi Sanju,

Please find the requested information.


Are you still seeing the error "Unable to read pidfile:" in glusterd log?
 >>>>  No
Are you seeing "brick is deemed not to be a part of the volume" error in glusterd log?
>>>> No

sh-4.2# getfattr -m -d -e hex /var/lib/heketi/mounts/vg_d5f17487744584e3652d3ca943b0b91b/brick_e15c12cceae12c8ab7782dd57cf5b6c1/brick

sh-4.2# pwd

/var/lib/heketi/mounts/vg_d5f17487744584e3652d3ca943b0b91b/brick_e15c12cceae12c8ab7782dd57cf5b6c1/brick

sh-4.2# getfattr -m -d -e hex /var/lib/heketi/mounts/vg_d5f17487744584e3652d3ca943b0b91b/brick_e15c12cceae12c8ab7782dd57cf5b6c1/brick/

(The "-m -d" invocations above return no output; listing all attributes with "-d -m ." below does work.)

sh-4.2# getfattr -d -m . -e hex /var/lib/heketi/mounts/vg_d5f17487744584e3652d3ca943b0b91b/brick_e15c12cceae12c8ab7782dd57cf5b6c1/brick/

getfattr: Removing leading '/' from absolute path names

# file: var/lib/heketi/mounts/vg_d5f17487744584e3652d3ca943b0b91b/brick_e15c12cceae12c8ab7782dd57cf5b6c1/brick/

security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000

trusted.afr.dirty=0x000000000000000000000000

trusted.afr.vol_3442e86b6d994a14de73f1b8c82cf0b8-client-0=0x000000000000000000000000

trusted.gfid=0x00000000000000000000000000000001

trusted.glusterfs.dht=0x000000010000000000000000ffffffff

trusted.glusterfs.volume-id=0x15477f3622e84757a0ce9000b63fa849


sh-4.2# ls -la |wc -l

86

sh-4.2# pwd

/var/lib/heketi/mounts/vg_d5f17487744584e3652d3ca943b0b91b/brick_e15c12cceae12c8ab7782dd57cf5b6c1/brick

sh-4.2#




From:        "Sanju Rakonde" <srakonde@xxxxxxxxxx>
To:        "Shaik Salam" <shaik.salam@xxxxxxx>
Cc:        "Amar Tumballi Suryanarayan" <atumball@xxxxxxxxxx>, "gluster-users@xxxxxxxxxxx List" <gluster-users@xxxxxxxxxxx>, "Murali Kottakota" <murali.kottakota@xxxxxxx>
Date:        01/24/2019 01:38 PM
Subject:        Re: [Bugs] Bricks are going offline unable to recover with heal/start force commands




"External email. Open with Caution"

Shaik,

Previously I was suspecting that the brick pid file might be missing, but I see it is present.

From the second node (where this brick is offline):
/var/run/gluster/vols/vol_3442e86b6d994a14de73f1b8c82cf0b8/192.168.3.5-var-lib-heketi-mounts-vg_d5f17487744584e3652d3ca943b0b91b-brick_e15c12cceae12c8ab7782dd57cf5b6c1-brick.pid
271
Are you still seeing the error "Unable to read pidfile:" in the glusterd log?

I also suspect that the brick might be missing its extended attributes. Are you seeing the "brick is deemed not to be a part of the volume" error in the glusterd log? If not, can you please provide us the output of "getfattr -m -d -e hex <brickpath>"?
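
In case that invocation prints nothing: a form that dumps every extended attribute of the brick root in hex (and the one that produced the attribute listing shown in the reply above) is, for example:

getfattr -d -m . -e hex /var/lib/heketi/mounts/vg_d5f17487744584e3652d3ca943b0b91b/brick_e15c12cceae12c8ab7782dd57cf5b6c1/brick

The attributes of interest are trusted.glusterfs.volume-id and trusted.gfid; a missing volume-id is what typically produces the "brick is deemed not to be a part of the volume" error mentioned above.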

On Thu, Jan 24, 2019 at 12:18 PM Shaik Salam <shaik.salam@xxxxxxx> wrote:
Hi Sanju,


Could you please have a look at my issue if you have time (or at least suggest a workaround)?


BR

Salam




From:        Shaik Salam/HYD/TCS
To:        "Sanju Rakonde" <srakonde@xxxxxxxxxx>
Cc:        "Amar Tumballi Suryanarayan" <atumball@xxxxxxxxxx>, "gluster-users@xxxxxxxxxxx List" <gluster-users@xxxxxxxxxxx>, "Murali Kottakota" <murali.kottakota@xxxxxxx>
Date:        01/23/2019 05:50 PM
Subject:        Re: [Bugs] Bricks are going offline unable to recover with heal/start force commands



   


Hi Sanju,


Please find requested information.


Sorry to repeat this again: I am trying the start force command after enabling brick debug logging, taking one volume as an example.

Please correct me if I am doing anything wrong.



[root@master ~]# oc rsh glusterfs-storage-vll7x

sh-4.2# gluster volume info vol_3442e86b6d994a14de73f1b8c82cf0b8

Volume Name: vol_3442e86b6d994a14de73f1b8c82cf0b8
Type: Replicate

Volume ID: 15477f36-22e8-4757-a0ce-9000b63fa849

Status: Started

Snapshot Count: 0

Number of Bricks: 1 x 3 = 3

Transport-type: tcp

Bricks:

Brick1: 192.168.3.6:/var/lib/heketi/mounts/vg_ca57f326195c243be2380ce4e42a4191/brick_952d75fd193c7209c9a81acbc23a3747/brick

Brick2: 192.168.3.5:/var/lib/heketi/mounts/vg_d5f17487744584e3652d3ca943b0b91b/brick_e15c12cceae12c8ab7782dd57cf5b6c1/brick
Brick3: 192.168.3.15:/var/lib/heketi/mounts/vg_462ea199185376b03e4b0317363bb88c/brick_1736459d19e8aaa1dcb5a87f48747d04/brick

Options Reconfigured:

diagnostics.brick-log-level: INFO

performance.client-io-threads: off

nfs.disable: on

transport.address-family: inet

sh-4.2# gluster volume status vol_3442e86b6d994a14de73f1b8c82cf0b8

Status of volume: vol_3442e86b6d994a14de73f1b8c82cf0b8

Gluster process                             TCP Port  RDMA Port  Online  Pid

------------------------------------------------------------------------------

Brick 192.168.3.6:/var/lib/heketi/mounts/vg

_ca57f326195c243be2380ce4e42a4191/brick_952

d75fd193c7209c9a81acbc23a3747/brick         49157     0          Y       250

Brick 192.168.3.5:/var/lib/heketi/mounts/vg

_d5f17487744584e3652d3ca943b0b91b/brick_e15

c12cceae12c8ab7782dd57cf5b6c1/brick         N/A       N/A        N       N/A

Brick 192.168.3.15:/var/lib/heketi/mounts/v

g_462ea199185376b03e4b0317363bb88c/brick_17

36459d19e8aaa1dcb5a87f48747d04/brick        49173     0          Y       225

Self-heal Daemon on localhost               N/A       N/A        Y       108434

Self-heal Daemon on matrix1.matrix.orange.l

ab                                          N/A       N/A        Y       69525

Self-heal Daemon on matrix2.matrix.orange.l

ab                                          N/A       N/A        Y       18569


gluster volume set vol_3442e86b6d994a14de73f1b8c82cf0b8 diagnostics.brick-log-level DEBUG
volume set: success

sh-4.2# gluster volume get vol_3442e86b6d994a14de73f1b8c82cf0b8 all |grep log

cluster.entry-change-log                on

cluster.data-change-log                 on

cluster.metadata-change-log             on

diagnostics.brick-log-level             DEBUG


sh-4.2# cd /var/log/glusterfs/bricks/

sh-4.2# ls -la |grep brick_e15c12cceae12c8ab7782dd57cf5b6c1

-rw-------. 1 root root       0 Jan 20 02:46 var-lib-heketi-mounts-vg_d5f17487744584e3652d3ca943b0b91b-brick_e15c12cceae12c8ab7782dd57cf5b6c1-brick.log  >>> nothing in the log

-rw-------. 1 root root  189057 Jan 18 09:20 var-lib-heketi-mounts-vg_d5f17487744584e3652d3ca943b0b91b-brick_e15c12cceae12c8ab7782dd57cf5b6c1-brick.log-20190120


[2019-01-23 11:49:32.475956] I [run.c:241:runner_log] (-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2b3a) [0x7fca9e139b3a] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2605) [0x7fca9e139605] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fcaa346f0e5] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S30samba-set.sh --volname=vol_3442e86b6d994a14de73f1b8c82cf0b8 -o diagnostics.brick-log-level=DEBUG --gd-workdir=/var/lib/glusterd

[2019-01-23 11:49:32.483191] I [run.c:241:runner_log] (-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2b3a) [0x7fca9e139b3a] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2605) [0x7fca9e139605] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fcaa346f0e5] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh --volname=vol_3442e86b6d994a14de73f1b8c82cf0b8 -o diagnostics.brick-log-level=DEBUG --gd-workdir=/var/lib/glusterd

[2019-01-23 11:48:59.111292] W [MSGID: 106036] [glusterd-snapshot.c:9514:glusterd_handle_snapshot_fn] 0-management: Snapshot list failed

[2019-01-23 11:50:14.112271] E [MSGID: 106026] [glusterd-snapshot.c:3962:glusterd_handle_snapshot_list] 0-management: Volume (vol_63854b105c40802bdec77290e91858ea) does not exist [Invalid argument]

[2019-01-23 11:50:14.112305] W [MSGID: 106036] [glusterd-snapshot.c:9514:glusterd_handle_snapshot_fn] 0-management: Snapshot list failed

[2019-01-23 11:50:20.322902] I [glusterd-utils.c:5994:glusterd_brick_start] 0-management: discovered already-running brick /var/lib/heketi/mounts/vg_d5f17487744584e3652d3ca943b0b91b/brick_e15c12cceae12c8ab7782dd57cf5b6c1/brick

[2019-01-23 11:50:20.322925] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_d5f17487744584e3652d3ca943b0b91b/brick_e15c12cceae12c8ab7782dd57cf5b6c1/brick on port 49165

[2019-01-23 11:50:20.327557] I [MSGID: 106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped

[2019-01-23 11:50:20.327586] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: nfs service is stopped

[2019-01-23 11:50:20.327604] I [MSGID: 106599] [glusterd-nfs-svc.c:82:glusterd_nfssvc_manager] 0-management: nfs/server.so xlator is not installed

[2019-01-23 11:50:20.337735] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 69525

[2019-01-23 11:50:21.338058] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: glustershd service is stopped

[2019-01-23 11:50:21.338180] I [MSGID: 106567] [glusterd-svc-mgmt.c:203:glusterd_svc_start] 0-management: Starting glustershd service

[2019-01-23 11:50:21.348234] I [MSGID: 106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped

[2019-01-23 11:50:21.348285] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: bitd service is stopped

[2019-01-23 11:50:21.348866] I [MSGID: 106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped

[2019-01-23 11:50:21.348883] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: scrub service is stopped

[2019-01-23 11:50:22.356502] I [run.c:241:runner_log] (-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2b3a) [0x7fca9e139b3a] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2605) [0x7fca9e139605] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fcaa346f0e5] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh --volname=vol_3442e86b6d994a14de73f1b8c82cf0b8 --first=no --version=1 --volume-op=start --gd-workdir=/var/lib/glusterd

[2019-01-23 11:50:22.368845] E [run.c:241:runner_log] (-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2b3a) [0x7fca9e139b3a] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2563) [0x7fca9e139563] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fcaa346f0e5] ) 0-management: Failed to execute script: /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh --volname=vol_3442e86b6d994a14de73f1b8c82cf0b8 --first=no --version=1 --volume-op=start --gd-workdir=/var/lib/glusterd


sh-4.2# gluster volume status vol_3442e86b6d994a14de73f1b8c82cf0b8

Status of volume: vol_3442e86b6d994a14de73f1b8c82cf0b8

Gluster process                             TCP Port  RDMA Port  Online  Pid

------------------------------------------------------------------------------

Brick 192.168.3.6:/var/lib/heketi/mounts/vg

_ca57f326195c243be2380ce4e42a4191/brick_952

d75fd193c7209c9a81acbc23a3747/brick         49157     0          Y       250

Brick 192.168.3.5:/var/lib/heketi/mounts/vg

_d5f17487744584e3652d3ca943b0b91b/brick_e15

c12cceae12c8ab7782dd57cf5b6c1/brick         N/A       N/A        N       N/A

Brick 192.168.3.15:/var/lib/heketi/mounts/v

g_462ea199185376b03e4b0317363bb88c/brick_17

36459d19e8aaa1dcb5a87f48747d04/brick        49173     0          Y       225

Self-heal Daemon on localhost               N/A       N/A        Y       109550

Self-heal Daemon on 192.168.3.6             N/A       N/A        Y       52557

Self-heal Daemon on 192.168.3.15            N/A       N/A        Y       16946


Task Status of Volume vol_3442e86b6d994a14de73f1b8c82cf0b8

------------------------------------------------------------------------------

There are no active volume tasks





From:        "Sanju Rakonde" <srakonde@xxxxxxxxxx>
To:        "Shaik Salam" <shaik.salam@xxxxxxx>
Cc:        "Amar Tumballi Suryanarayan" <atumball@xxxxxxxxxx>, "gluster-users@xxxxxxxxxxx List" <gluster-users@xxxxxxxxxxx>, "Murali Kottakota" <murali.kottakota@xxxxxxx>
Date:        01/23/2019 02:15 PM
Subject:        Re: [Bugs] Bricks are going offline unable to recover with heal/start force commands




"External email. Open with Caution"

Hi Shaik,

I can see below errors in glusterd logs.

[2019-01-22 09:20:17.540196] E [MSGID: 101012] [common-utils.c:4010:gf_is_service_running] 0-: Unable to read pidfile: /var/run/gluster/vols/vol_e1aa1283d5917485d88c4a742eeff422/192.168.3.6-var-lib-heketi-mounts-vg_526f35058433c6b03130bba4e0a7dd87-brick_9e7c382e5f853d471c347bc5590359af-brick.pid
[2019-01-22 09:20:17.546408] E [MSGID: 101012] [common-utils.c:4010:gf_is_service_running] 0-: Unable to read pidfile: /var/run/gluster/vols/vol_f0ed498d7e781d7bb896244175b31f9e/192.168.3.6-var-lib-heketi-mounts-vg_56391bec3c8bfe4fc116de7bddfc2af4-brick_47ed9e0663ad0f6f676ddd6ad7e3dcde-brick.pid
[2019-01-22 09:20:17.552575] E [MSGID: 101012] [common-utils.c:4010:gf_is_service_running] 0-: Unable to read pidfile: /var/run/gluster/vols/vol_f387519c9b004ec14e80696db88ef0f8/192.168.3.6-var-lib-heketi-mounts-vg_56391bec3c8bfe4fc116de7bddfc2af4-brick_06ad6c73dfbf6a5fc21334f98c9973c2-brick.pid
[2019-01-22 09:20:17.558888] E [MSGID: 101012] [common-utils.c:4010:gf_is_service_running] 0-: Unable to read pidfile: /var/run/gluster/vols/vol_f8ca343c60e6efe541fe02d16ca02a7d/192.168.3.6-var-lib-heketi-mounts-vg_526f35058433c6b03130bba4e0a7dd87-brick_525225f65753b05dfe33aeaeb9c5de39-brick.pid
[2019-01-22 09:20:17.565266] E [MSGID: 101012] [common-utils.c:4010:gf_is_service_running] 0-: Unable to read pidfile: /var/run/gluster/vols/vol_fe882e074c0512fd9271fc2ff5a0bfe1/192.168.3.6-var-lib-heketi-mounts-vg_28708570b029e5eff0a996c453a11691-brick_d4f30d6e465a8544b759a7016fb5aab5-brick.pid
[2019-01-22 09:20:17.585926] E [MSGID: 106028] [glusterd-utils.c:8222:glusterd_brick_signal] 0-glusterd: Unable to get pid of brick process
[2019-01-22 09:20:17.617806] E [MSGID: 106028] [glusterd-utils.c:8222:glusterd_brick_signal] 0-glusterd: Unable to get pid of brick process
[2019-01-22 09:20:17.649628] E [MSGID: 101012] [common-utils.c:4010:gf_is_service_running] 0-: Unable to read pidfile: /var/run/gluster/glustershd/glustershd.pid
[2019-01-22 09:20:17.649700] E [MSGID: 101012] [common-utils.c:4010:gf_is_service_running] 0-: Unable to read pidfile: /var/run/gluster/glustershd/glustershd.pid

So it looks like neither gf_is_service_running() nor glusterd_brick_signal() is able to read the pid file. That suggests the pidfiles might be empty.

Can you please paste the contents of the brick pidfiles? You can find them in /var/run/gluster/vols/<volname>/, or you can just run this command: "for i in `ls /var/run/gluster/vols/*/*.pid`; do echo $i; cat $i; done"
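
As a follow-up check (a sketch reusing the pidfile path for the offline brick quoted higher up in this thread), you can also verify whether the pid recorded in a pidfile still corresponds to a running brick process:

PIDFILE=/var/run/gluster/vols/vol_3442e86b6d994a14de73f1b8c82cf0b8/192.168.3.5-var-lib-heketi-mounts-vg_d5f17487744584e3652d3ca943b0b91b-brick_e15c12cceae12c8ab7782dd57cf5b6c1-brick.pid
cat "$PIDFILE"                         # should contain a single pid
ps -p "$(cat "$PIDFILE")" -o pid,cmd   # should show a glusterfsd process for this brick

An empty file, or a pid that no longer belongs to a glusterfsd process, would be consistent with the "Unable to read pidfile" errors above.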

On Wed, Jan 23, 2019 at 12:49 PM Shaik Salam <shaik.salam@xxxxxxx> wrote:
Hi Sanju,


Please find the requested information and attached logs.



 

The brick below is offline; I tried the start force and heal commands but it does not come up.


sh-4.2#
sh-4.2# gluster --version

glusterfs 4.1.5



sh-4.2# gluster volume status vol_3442e86b6d994a14de73f1b8c82cf0b8

Status of volume: vol_3442e86b6d994a14de73f1b8c82cf0b8

Gluster process                             TCP Port  RDMA Port  Online  Pid

------------------------------------------------------------------------------

Brick 192.168.3.6:/var/lib/heketi/mounts/vg

_ca57f326195c243be2380ce4e42a4191/brick_952

d75fd193c7209c9a81acbc23a3747/brick         49166     0          Y       269

Brick 192.168.3.5:/var/lib/heketi/mounts/vg

_d5f17487744584e3652d3ca943b0b91b/brick_e15

c12cceae12c8ab7782dd57cf5b6c1/brick         N/A       N/A        N       N/A

Brick 192.168.3.15:/var/lib/heketi/mounts/v

g_462ea199185376b03e4b0317363bb88c/brick_17

36459d19e8aaa1dcb5a87f48747d04/brick        49173     0          Y       225

Self-heal Daemon on localhost               N/A       N/A        Y       45826

Self-heal Daemon on 192.168.3.6             N/A       N/A        Y       65196

Self-heal Daemon on 192.168.3.15            N/A       N/A        Y       52915


Task Status of Volume vol_3442e86b6d994a14de73f1b8c82cf0b8

------------------------------------------------------------------------------



We can see the following events when we start the volume with force:


/mgmt/glusterd.so(+0xe2b3a) [0x7fca9e139b3a] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2605) [0x7fca9e139605] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fcaa346f0e5] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh --volname=vol_3442e86b6d994a14de73f1b8c82cf0b8 --first=no --version=1 --volume-op=start --gd-workdir=/var/lib/glusterd

[2019-01-21 08:22:34.555068] E [run.c:241:runner_log] (-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2b3a) [0x7fca9e139b3a] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2563) [0x7fca9e139563] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fcaa346f0e5] ) 0-management: Failed to execute script: /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh --volname=vol_3442e86b6d994a14de73f1b8c82cf0b8 --first=no --version=1 --volume-op=start --gd-workdir=/var/lib/glusterd

[2019-01-21 08:22:53.389049] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_3442e86b6d994a14de73f1b8c82cf0b8

[2019-01-21 08:23:25.346839] I [MSGID: 106487] [glusterd-handler.c:1486:__glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req



We can see the following events when we heal the volume:


[2019-01-21 08:20:07.576070] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-glusterfs: error returned while attempting to connect to host:(null), port:0

[2019-01-21 08:20:07.580225] I [cli-rpc-ops.c:9182:gf_cli_heal_volume_cbk] 0-cli: Received resp to heal volume

[2019-01-21 08:20:07.580326] I [input.c:31:cli_batch] 0-: Exiting with: -1

[2019-01-21 08:22:30.423311] I [cli.c:768:main] 0-cli: Started running gluster with version 4.1.5

[2019-01-21 08:22:30.463648] I [MSGID: 101190] [event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1

[2019-01-21 08:22:30.463718] I [socket.c:2632:socket_event_handler] 0-transport: EPOLLERR - disconnecting now

[2019-01-21 08:22:30.463859] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-glusterfs: error returned while attempting to connect to host:(null), port:0

[2019-01-21 08:22:33.427710] I [socket.c:2632:socket_event_handler] 0-transport: EPOLLERR - disconnecting now

[2019-01-21 08:22:34.581555] I [cli-rpc-ops.c:1472:gf_cli_start_volume_cbk] 0-cli: Received resp to start volume

[2019-01-21 08:22:34.581678] I [input.c:31:cli_batch] 0-: Exiting with: 0

[2019-01-21 08:22:53.345351] I [cli.c:768:main] 0-cli: Started running gluster with version 4.1.5

[2019-01-21 08:22:53.387992] I [MSGID: 101190] [event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1

[2019-01-21 08:22:53.388059] I [socket.c:2632:socket_event_handler] 0-transport: EPOLLERR - disconnecting now

[2019-01-21 08:22:53.388138] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-glusterfs: error returned while attempting to connect to host:(null), port:0

[2019-01-21 08:22:53.394737] I [input.c:31:cli_batch] 0-: Exiting with: 0

[2019-01-21 08:23:25.304688] I [cli.c:768:main] 0-cli: Started running gluster with version 4.1.5

[2019-01-21 08:23:25.346319] I [MSGID: 101190] [event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1

[2019-01-21 08:23:25.346389] I [socket.c:2632:socket_event_handler] 0-transport: EPOLLERR - disconnecting now

[2019-01-21 08:23:25.346500] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-glusterfs: error returned while attempting to connect to host:(null), port:0



I enabled DEBUG mode at the brick level, but nothing is being written to the brick log.


gluster volume set vol_3442e86b6d994a14de73f1b8c82cf0b8 diagnostics.brick-log-level DEBUG
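
One way to confirm the option actually took effect (the same check used further up in this thread) is to read it back:

gluster volume get vol_3442e86b6d994a14de73f1b8c82cf0b8 diagnostics.brick-log-level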


sh-4.2# pwd

/var/log/glusterfs/bricks


sh-4.2# ls -la |grep brick_e15c12cceae12c8ab7782dd57cf5b6c1

-rw-------. 1 root root       0 Jan 20 02:46 var-lib-heketi-mounts-vg_d5f17487744584e3652d3ca943b0b91b-brick_e15c12cceae12c8ab7782dd57cf5b6c1-brick.log






From:        Sanju Rakonde <srakonde@xxxxxxxxxx>
To:        Shaik Salam <shaik.salam@xxxxxxx>
Cc:        Amar Tumballi Suryanarayan <atumball@xxxxxxxxxx>, "gluster-users@xxxxxxxxxxx List" <gluster-users@xxxxxxxxxxx>
Date:        01/22/2019 02:21 PM
Subject:        Re: [Bugs] Bricks are going offline unable to recover with heal/start force commands




"External email. Open with Caution"

Hi Shaik,

Can you please provide us the complete glusterd and cmd_history logs from all the nodes in the cluster? Also, please paste the output of the following commands (from all nodes):
1. gluster --version
2. gluster volume info
3. gluster volume status
4. gluster peer status
5. ps -ax | grep glusterfsd
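
For reference, on a typical installation these logs live under /var/log/glusterfs/ (glusterd.log and cmd_history.log). A sketch, assuming the default log directory inside each storage pod, for collecting both in one go:

for f in /var/log/glusterfs/glusterd.log /var/log/glusterfs/cmd_history.log; do echo "== $f =="; cat "$f"; done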

On Tue, Jan 22, 2019 at 12:47 PM Shaik Salam <shaik.salam@xxxxxxx> wrote:
Hi Surya,


It is already a customer setup and we cannot redeploy it again.

I enabled debug for the brick-level log but nothing is being written to it.

Can you tell me if there are any other ways to troubleshoot, or other logs to look at?



From:        Shaik Salam/HYD/TCS
To:        "Amar Tumballi Suryanarayan" <atumball@xxxxxxxxxx>
Cc:        "gluster-users@xxxxxxxxxxx List" <gluster-users@xxxxxxxxxxx>
Date:        01/22/2019 12:06 PM
Subject:        Re: [Bugs] Bricks are going offline unable to recover with heal/start force commands



Hi Surya,


I have enabled DEBUG mode at the brick level, but nothing is being written to the brick log.


gluster volume set vol_3442e86b6d994a14de73f1b8c82cf0b8 diagnostics.brick-log-level DEBUG


sh-4.2# pwd

/var/log/glusterfs/bricks


sh-4.2# ls -la |grep brick_e15c12cceae12c8ab7782dd57cf5b6c1

-rw-------. 1 root root       0 Jan 20 02:46 var-lib-heketi-mounts-vg_d5f17487744584e3652d3ca943b0b91b-brick_e15c12cceae12c8ab7782dd57cf5b6c1-brick.log

BR

Salam





From:        "Amar Tumballi Suryanarayan" <atumball@xxxxxxxxxx>
To:        "Shaik Salam" <shaik.salam@xxxxxxx>
Cc:        "gluster-users@xxxxxxxxxxx List" <gluster-users@xxxxxxxxxxx>
Date:        01/22/2019 11:38 AM
Subject:        Re: [Bugs] Bricks are going offline unable to recover with heal/start force commands




"External email. Open with Caution"

Hi Shaik,

Can you check what is in the brick logs? They are located in /var/log/glusterfs/bricks/*.

Looks like the samba hooks script failed, but that shouldn't matter in this use case.

Also, I see that you are trying to set up heketi to provision volumes, which means you may be using Gluster in a container use case. If you are still in the 'PoC' phase, can you give https://github.com/gluster/gcs a try? That makes the deployment and the stack a little simpler.

-Amar




On Tue, Jan 22, 2019 at 11:29 AM Shaik Salam <shaik.salam@xxxxxxx> wrote:
Can anyone advise how to recover the bricks, other than with the heal/start force commands, based on the events from the logs below?

Please let me know if any other logs are required.

Thanks in advance.


BR

Salam




From:        Shaik Salam/HYD/TCS
To:        bugs@xxxxxxxxxxx, gluster-users@xxxxxxxxxxx
Date:        01/21/2019 10:03 PM
Subject:        Bricks are going offline unable to recover with heal/start force commands



Hi,


Bricks are offline and I am unable to recover them with the following commands:


gluster volume heal <vol-name>


gluster volume start <vol-name> force


But the bricks are still offline.
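
For completeness, the commands above, run against the affected volume, look like the sketch below (the heal info check is an addition here; it lists entries still pending heal per brick):

gluster volume start vol_3442e86b6d994a14de73f1b8c82cf0b8 force
gluster volume heal vol_3442e86b6d994a14de73f1b8c82cf0b8
gluster volume heal vol_3442e86b6d994a14de73f1b8c82cf0b8 info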



sh-4.2# gluster volume status vol_3442e86b6d994a14de73f1b8c82cf0b8

Status of volume: vol_3442e86b6d994a14de73f1b8c82cf0b8

Gluster process                             TCP Port  RDMA Port  Online  Pid

------------------------------------------------------------------------------

Brick 192.168.3.6:/var/lib/heketi/mounts/vg

_ca57f326195c243be2380ce4e42a4191/brick_952

d75fd193c7209c9a81acbc23a3747/brick         49166     0          Y       269

Brick 192.168.3.5:/var/lib/heketi/mounts/vg

_d5f17487744584e3652d3ca943b0b91b/brick_e15

c12cceae12c8ab7782dd57cf5b6c1/brick         N/A       N/A        N       N/A

Brick 192.168.3.15:/var/lib/heketi/mounts/v

g_462ea199185376b03e4b0317363bb88c/brick_17

36459d19e8aaa1dcb5a87f48747d04/brick        49173     0          Y       225

Self-heal Daemon on localhost               N/A       N/A        Y       45826

Self-heal Daemon on 192.168.3.6             N/A       N/A        Y       65196

Self-heal Daemon on 192.168.3.15            N/A       N/A        Y       52915


Task Status of Volume vol_3442e86b6d994a14de73f1b8c82cf0b8

------------------------------------------------------------------------------



We can see the following events when we start the volume with force:


/mgmt/glusterd.so(+0xe2b3a) [0x7fca9e139b3a] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2605) [0x7fca9e139605] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fcaa346f0e5] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh --volname=vol_3442e86b6d994a14de73f1b8c82cf0b8 --first=no --version=1 --volume-op=start --gd-workdir=/var/lib/glusterd

[2019-01-21 08:22:34.555068] E [run.c:241:runner_log] (-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2b3a) [0x7fca9e139b3a] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe2563) [0x7fca9e139563] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fcaa346f0e5] ) 0-management: Failed to execute script: /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh --volname=vol_3442e86b6d994a14de73f1b8c82cf0b8 --first=no --version=1 --volume-op=start --gd-workdir=/var/lib/glusterd

[2019-01-21 08:22:53.389049] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_3442e86b6d994a14de73f1b8c82cf0b8

[2019-01-21 08:23:25.346839] I [MSGID: 106487] [glusterd-handler.c:1486:__glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req



We can see the following events when we heal the volume:


[2019-01-21 08:20:07.576070] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-glusterfs: error returned while attempting to connect to host:(null), port:0

[2019-01-21 08:20:07.580225] I [cli-rpc-ops.c:9182:gf_cli_heal_volume_cbk] 0-cli: Received resp to heal volume

[2019-01-21 08:20:07.580326] I [input.c:31:cli_batch] 0-: Exiting with: -1

[2019-01-21 08:22:30.423311] I [cli.c:768:main] 0-cli: Started running gluster with version 4.1.5

[2019-01-21 08:22:30.463648] I [MSGID: 101190] [event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1

[2019-01-21 08:22:30.463718] I [socket.c:2632:socket_event_handler] 0-transport: EPOLLERR - disconnecting now

[2019-01-21 08:22:30.463859] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-glusterfs: error returned while attempting to connect to host:(null), port:0

[2019-01-21 08:22:33.427710] I [socket.c:2632:socket_event_handler] 0-transport: EPOLLERR - disconnecting now

[2019-01-21 08:22:34.581555] I [cli-rpc-ops.c:1472:gf_cli_start_volume_cbk] 0-cli: Received resp to start volume

[2019-01-21 08:22:34.581678] I [input.c:31:cli_batch] 0-: Exiting with: 0

[2019-01-21 08:22:53.345351] I [cli.c:768:main] 0-cli: Started running gluster with version 4.1.5

[2019-01-21 08:22:53.387992] I [MSGID: 101190] [event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1

[2019-01-21 08:22:53.388059] I [socket.c:2632:socket_event_handler] 0-transport: EPOLLERR - disconnecting now

[2019-01-21 08:22:53.388138] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-glusterfs: error returned while attempting to connect to host:(null), port:0

[2019-01-21 08:22:53.394737] I [input.c:31:cli_batch] 0-: Exiting with: 0

[2019-01-21 08:23:25.304688] I [cli.c:768:main] 0-cli: Started running gluster with version 4.1.5

[2019-01-21 08:23:25.346319] I [MSGID: 101190] [event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1

[2019-01-21 08:23:25.346389] I [socket.c:2632:socket_event_handler] 0-transport: EPOLLERR - disconnecting now

[2019-01-21 08:23:25.346500] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-glusterfs: error returned while attempting to connect to host:(null), port:0




Please let us know the steps to recover the bricks.



BR

Salam

=====-----=====-----=====
Notice: The information contained in this e-mail
message and/or attachments to it may contain
confidential or privileged information. If you are
not the intended recipient, any dissemination, use,
review, distribution, printing or copying of the
information contained in this e-mail message
and/or attachments to it are strictly prohibited. If
you have received this communication in error,
please notify us by reply e-mail or telephone and
immediately and permanently delete the message
and any attachments. Thank you
_______________________________________________
Bugs mailing list

Bugs@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/bugs


--
Amar Tumballi (amarts)
_______________________________________________
Gluster-users mailing list

Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users


--
Thanks,
Sanju


--
Thanks,
Sanju


--
Thanks,
Sanju



--
Thanks,
Sanju

Attachment: firstnode.log
Description: Binary data

Attachment: secondnode.log
Description: Binary data

Attachment: Thirdnode.log
Description: Binary data

sh-4.2# gluster volume get vol_3442e86b6d994a14de73f1b8c82cf0b8 all
Option                                  Value
------                                  -----
cluster.lookup-unhashed                 on
cluster.lookup-optimize                 on
cluster.min-free-disk                   10%
cluster.min-free-inodes                 5%
cluster.rebalance-stats                 off
cluster.subvols-per-directory           (null)
cluster.readdir-optimize                off
cluster.rsync-hash-regex                (null)
cluster.extra-hash-regex                (null)
cluster.dht-xattr-name                  trusted.glusterfs.dht
cluster.randomize-hash-range-by-gfid    off
cluster.rebal-throttle                  normal
cluster.lock-migration                  off
cluster.force-migration                 off
cluster.local-volume-name               (null)
cluster.weighted-rebalance              on
cluster.switch-pattern                  (null)
cluster.entry-change-log                on
cluster.read-subvolume                  (null)
cluster.read-subvolume-index            -1
cluster.read-hash-mode                  1
cluster.background-self-heal-count      8
cluster.metadata-self-heal              on
cluster.data-self-heal                  on
cluster.entry-self-heal                 on
cluster.self-heal-daemon                on
cluster.heal-timeout                    600
cluster.self-heal-window-size           1
cluster.data-change-log                 on
cluster.metadata-change-log             on
cluster.data-self-heal-algorithm        (null)
cluster.eager-lock                      on
disperse.eager-lock                     on
disperse.other-eager-lock               on
disperse.eager-lock-timeout             1
disperse.other-eager-lock-timeout       1
cluster.quorum-type                     auto
cluster.quorum-count                    (null)
cluster.choose-local                    true
cluster.self-heal-readdir-size          1KB
cluster.post-op-delay-secs              1
cluster.ensure-durability               on
cluster.consistent-metadata             no
cluster.heal-wait-queue-length          128
cluster.favorite-child-policy           none
cluster.full-lock                       yes
cluster.stripe-block-size               128KB
cluster.stripe-coalesce                 true
diagnostics.latency-measurement         off
diagnostics.dump-fd-stats               off
diagnostics.count-fop-hits              off
diagnostics.brick-log-level             INFO
diagnostics.client-log-level            INFO
diagnostics.brick-sys-log-level         CRITICAL
diagnostics.client-sys-log-level        CRITICAL
diagnostics.brick-logger                (null)
diagnostics.client-logger               (null)
diagnostics.brick-log-format            (null)
diagnostics.client-log-format           (null)
diagnostics.brick-log-buf-size          5
diagnostics.client-log-buf-size         5
diagnostics.brick-log-flush-timeout     120
diagnostics.client-log-flush-timeout    120
diagnostics.stats-dump-interval         0
diagnostics.fop-sample-interval         0
diagnostics.stats-dump-format           json
diagnostics.fop-sample-buf-size         65535
diagnostics.stats-dnscache-ttl-sec      86400
performance.cache-max-file-size         0
performance.cache-min-file-size         0
performance.cache-refresh-timeout       1
performance.cache-priority
performance.cache-size                  32MB
performance.io-thread-count             16
performance.high-prio-threads           16
performance.normal-prio-threads         16
performance.low-prio-threads            16
performance.least-prio-threads          1
performance.enable-least-priority       on
performance.iot-watchdog-secs           (null)
performance.iot-cleanup-disconnected-reqs off
performance.iot-pass-through            false
performance.io-cache-pass-through       false
performance.cache-size                  128MB
performance.qr-cache-timeout            1
performance.cache-invalidation          false
performance.flush-behind                on
performance.nfs.flush-behind            on
performance.write-behind-window-size    1MB
performance.resync-failed-syncs-after-fsync off
performance.nfs.write-behind-window-size 1MB
performance.strict-o-direct             off
performance.nfs.strict-o-direct         off
performance.strict-write-ordering       off
performance.nfs.strict-write-ordering   off
performance.write-behind-trickling-writes on
performance.aggregate-size              128KB
performance.nfs.write-behind-trickling-writes on
performance.lazy-open                   yes
performance.read-after-open             no
performance.open-behind-pass-through    false
performance.read-ahead-page-count       4
performance.read-ahead-pass-through     false
performance.readdir-ahead-pass-through  false
performance.md-cache-pass-through       false
performance.md-cache-timeout            1
performance.cache-swift-metadata        true
performance.cache-samba-metadata        false
performance.cache-capability-xattrs     true
performance.cache-ima-xattrs            true
performance.md-cache-statfs             off
performance.xattr-cache-list
performance.nl-cache-pass-through       false
features.encryption                     off
encryption.master-key                   (null)
encryption.data-key-size                256
encryption.block-size                   4096
network.frame-timeout                   1800
network.ping-timeout                    42
network.tcp-window-size                 (null)
network.remote-dio                      disable
client.event-threads                    2
client.tcp-user-timeout                 0
client.keepalive-time                   20
client.keepalive-interval               2
client.keepalive-count                  9
network.tcp-window-size                 (null)
network.inode-lru-limit                 16384
auth.allow                              *
auth.reject                             (null)
transport.keepalive                     1
server.allow-insecure                   on
server.root-squash                      off
server.anonuid                          65534
server.anongid                          65534
server.statedump-path                   /var/run/gluster
server.outstanding-rpc-limit            64
server.ssl                              (null)
auth.ssl-allow                          *
server.manage-gids                      off
server.dynamic-auth                     on
client.send-gids                        on
server.gid-timeout                      300
server.own-thread                       (null)
server.event-threads                    1
server.tcp-user-timeout                 0
server.keepalive-time                   20
server.keepalive-interval               2
server.keepalive-count                  9
transport.listen-backlog                1024
ssl.own-cert                            (null)
ssl.private-key                         (null)
ssl.ca-list                             (null)
ssl.crl-path                            (null)
ssl.certificate-depth                   (null)
ssl.cipher-list                         (null)
ssl.dh-param                            (null)
ssl.ec-curve                            (null)
transport.address-family                inet
performance.write-behind                on
performance.read-ahead                  on
performance.readdir-ahead               on
performance.io-cache                    on
performance.quick-read                  on
performance.open-behind                 on
performance.nl-cache                    off
performance.stat-prefetch               on
performance.client-io-threads           off
performance.nfs.write-behind            on
performance.nfs.read-ahead              off
performance.nfs.io-cache                off
performance.nfs.quick-read              off
performance.nfs.stat-prefetch           off
performance.nfs.io-threads              off
performance.force-readdirp              true
performance.cache-invalidation          false
features.uss                            off
features.snapshot-directory             .snaps
features.show-snapshot-directory        off
features.tag-namespaces                 off
network.compression                     off
network.compression.window-size         -15
network.compression.mem-level           8
network.compression.min-size            0
network.compression.compression-level   -1
network.compression.debug               false
features.default-soft-limit             80%
features.soft-timeout                   60
features.hard-timeout                   5
features.alert-time                     86400
features.quota-deem-statfs              off
geo-replication.indexing                off
geo-replication.indexing                off
geo-replication.ignore-pid-check        off
geo-replication.ignore-pid-check        off
features.quota                          off
features.inode-quota                    off
features.bitrot                         disable
debug.trace                             off
debug.log-history                       no
debug.log-file                          no
debug.exclude-ops                       (null)
debug.include-ops                       (null)
debug.error-gen                         off
debug.error-failure                     (null)
debug.error-number                      (null)
debug.random-failure                    off
debug.error-fops                        (null)
nfs.disable                             on
features.read-only                      off
features.worm                           off
features.worm-file-level                off
features.worm-files-deletable           on
features.default-retention-period       120
features.retention-mode                 relax
features.auto-commit-period             180
storage.linux-aio                       off
storage.batch-fsync-mode                reverse-fsync
storage.batch-fsync-delay-usec          0
storage.owner-uid                       -1
storage.owner-gid                       -1
storage.node-uuid-pathinfo              off
storage.health-check-interval           30
storage.build-pgfid                     off
storage.gfid2path                       on
storage.gfid2path-separator             :
storage.reserve                         1
storage.health-check-timeout            10
storage.fips-mode-rchecksum             off
storage.force-create-mode               0000
storage.force-directory-mode            0000
storage.create-mask                     0777
storage.create-directory-mask           0777
storage.max-hardlinks                   100
storage.ctime                           off
storage.bd-aio                          off
config.gfproxyd                         off
cluster.server-quorum-type              off
cluster.server-quorum-ratio             0
changelog.changelog                     off
changelog.changelog-dir                 {{ brick.path }}/.glusterfs/changelogs
changelog.encoding                      ascii
changelog.rollover-time                 15
changelog.fsync-interval                5
changelog.changelog-barrier-timeout     120
changelog.capture-del-path              off
features.barrier                        disable
features.barrier-timeout                120
features.trash                          off
features.trash-dir                      .trashcan
features.trash-eliminate-path           (null)
features.trash-max-filesize             5MB
features.trash-internal-op              off
cluster.enable-shared-storage           disable
cluster.write-freq-threshold            0
cluster.read-freq-threshold             0
cluster.tier-pause                      off
cluster.tier-promote-frequency          120
cluster.tier-demote-frequency           3600
cluster.watermark-hi                    90
cluster.watermark-low                   75
cluster.tier-mode                       cache
cluster.tier-max-promote-file-size      0
cluster.tier-max-mb                     4000
cluster.tier-max-files                  10000
cluster.tier-query-limit                100
cluster.tier-compact                    on
cluster.tier-hot-compact-frequency      604800
cluster.tier-cold-compact-frequency     604800
features.ctr-enabled                    off
features.record-counters                off
features.ctr-record-metadata-heat       off
features.ctr_link_consistency           off
features.ctr_lookupheal_link_timeout    300
features.ctr_lookupheal_inode_timeout   300
features.ctr-sql-db-cachesize           12500
features.ctr-sql-db-wal-autocheckpoint  25000
features.selinux                        on
locks.trace                             off
locks.mandatory-locking                 off
cluster.disperse-self-heal-daemon       enable
cluster.quorum-reads                    no
client.bind-insecure                    (null)
features.shard                          off
features.shard-block-size               64MB
features.scrub-throttle                 lazy
features.scrub-freq                     biweekly
features.scrub                          false
features.expiry-time                    120
features.cache-invalidation             off
features.cache-invalidation-timeout     60
features.leases                         off
features.lease-lock-recall-timeout      60
disperse.background-heals               8
disperse.heal-wait-qlength              128
cluster.heal-timeout                    600
dht.force-readdirp                      on
disperse.read-policy                    gfid-hash
cluster.shd-max-threads                 1
cluster.shd-wait-qlength                1024
cluster.locking-scheme                  full
cluster.granular-entry-heal             no
features.locks-revocation-secs          0
features.locks-revocation-clear-all     false
features.locks-revocation-max-blocked   0
features.locks-monkey-unlocking         false
features.locks-notify-contention        no
features.locks-notify-contention-delay  5
disperse.shd-max-threads                1
disperse.shd-wait-qlength               1024
disperse.cpu-extensions                 auto
disperse.self-heal-window-size          1
cluster.use-compound-fops               off
performance.parallel-readdir            off
performance.rda-request-size            131072
performance.rda-low-wmark               4096
performance.rda-high-wmark              128KB
performance.rda-cache-limit             10MB
performance.nl-cache-positive-entry     false
performance.nl-cache-limit              10MB
performance.nl-cache-timeout            60
cluster.brick-multiplex                 off
cluster.max-bricks-per-process          0
disperse.optimistic-change-log          on
disperse.stripe-cache                   4
cluster.halo-enabled                    False
cluster.halo-shd-max-latency            99999
cluster.halo-nfsd-max-latency           5
cluster.halo-max-latency                5
cluster.halo-max-replicas               99999
cluster.halo-min-replicas               2
debug.delay-gen                         off
delay-gen.delay-percentage              10%
delay-gen.delay-duration                100000
delay-gen.enable
disperse.parallel-writes                on
features.sdfs                           off
features.cloudsync                      off
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
