Hi,
Thank you for the answer and sorry for the delay:
2017-07-19 16:55 GMT+02:00 Ravishankar N <ravishankar@xxxxxxxxxx>:
1. What does the glustershd.log say on all 3 nodes when you run the command? Does it complain anything about these files?
No, glustershd.log is clean; there are no extra log entries after running the command on any of the 3 nodes.
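For reference, this is roughly how I checked on each node (the path is the default glusterfs log location; the grep is only a rough filter for error/warning lines):

# tail -f /var/log/glusterfs/glustershd.log
# grep -iE "error|warning" /var/log/glusterfs/glustershd.log | tail -n 20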
2. Are these 12 files also present in the 3rd data brick?
I've just checked: all files exist on all 3 nodes.
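I verified presence directly on each brick with something like the first line below (the path is only a placeholder for one of the 12 entries); the getfattr line is an additional check I can run if useful, since it dumps the trusted.afr.* changelog xattrs that self-heal relies on:

# ls -l /gluster/engine/brick/<path-of-one-of-the-files>
# getfattr -d -m . -e hex /gluster/engine/brick/<path-of-one-of-the-files>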
3. Can you provide the output of `gluster volume info` for the this volume?
Volume Name: engine
Type: Replicate
Volume ID: d19c19e3-910d-437b-8ba7-4f2a23d17515
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node01:/gluster/engine/brick
Brick2: node02:/gluster/engine/brick
Brick3: node04:/gluster/engine/brick
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
storage.owner-uid: 36
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-gid: 36
features.shard-block-size: 512MB
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: on
auth.allow: *
server.allow-insecure: on
Some extra info:
We recently changed the gluster volume from replica 2 + 1 arbiter to a full replica 3 cluster.
Just curious, how did you do this? `remove-brick` of arbiter brick followed by an `add-brick` to increase to replica-3?
Yes
#gluster volume remove-brick engine replica 2 node03:/gluster/data/brick force (OK!)
#gluster volume heal engine info (no entries!)
#gluster volume add-brick engine replica 3 node04:/gluster/engine/brick (OK!)
After a few minutes:
[root@node01 ~]# gluster volume heal engine info
Brick node01:/gluster/engine/brick
Status: Connected
Number of entries: 0
Brick node02:/gluster/engine/brick
Status: Connected
Number of entries: 0
Brick node04:/gluster/engine/brick
Status: Connected
Number of entries: 0
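As an extra sanity check that the new brick really got fully populated after the add-brick, I can compare usage across the three bricks and make sure nothing is flagged as split-brain (only a rough comparison, since shards and sparse files make the sizes approximate):

# du -sh /gluster/engine/brick          (run on node01, node02 and node04; sizes should be roughly equal)
# gluster volume heal engine info split-brain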
Thanks,
Ravi
Another extra piece of info (I don't know if this could be the problem): five days ago a blackout suddenly shut down the network switch (including the gluster network) of node03 and node04 ... But I don't know whether this problem started after that blackout.
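If it helps, I can check whether the bricks or peers logged disconnects around the time of the blackout; something like this (the brick log file name is derived from the brick path, so it may differ slightly):

# gluster peer status
# gluster volume status engine
# grep -i disconnect /var/log/glusterfs/bricks/gluster-engine-brick.log | tail -n 20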
Thank you!
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users