On March 11, 2015 4:24:09 AM PDT, Alessandro Ipe <Alessandro.Ipe@xxxxxxxx> wrote:
Well, it is even worse. Now an "ls -R" on the volume results in a lot of errors like:
[2015-03-11 11:18:31.957505] E [afr-self-heal-common.c:233:afr_sh_print_split_brain_log] 0-md1-replicate-2: Unable to self-heal contents of '/library' (possible split-brain). Please delete the file from all but the preferred subvolume.- Pending matrix: [ [ 0 2 ] [ 1 0 ] ]
[2015-03-11 11:18:31.957692] E [afr-self-heal-common.c:2868:afr_log_self_heal_completion_status] 0-md1-replicate-2: metadata self heal failed, on /library
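If I understand the pending matrix correctly, entry (i, j) counts the operations that brick i still holds as pending against brick j, so [ [ 0 2 ] [ 1 0 ] ] means both bricks of the pair accuse each other, i.e. a real split-brain. If the installed release supports it, the affected files can presumably also be listed with:

  # assumption: this release already has "heal info split-brain"
  gluster volume heal md1 info split-brain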
I am desperate...
A.
On Wednesday 11 March 2015 12:05:33 you wrote:
Hi,
When trying to access a file on a gluster client (through fuse), I get an
"Input/output error" message.
Getting the attributes of the file gives me, for the first brick:
# file: data/glusterfs/md1/brick1/kvm/hail/hail_home.qcow2
trusted.afr.md1-client-2=0sAAAAAAAAAAAAAAAA
trusted.afr.md1-client-3=0sAAABdAAAAAAAAAAA
trusted.gfid=0sOCFPGCdrQ9uyq2yTTPCKqQ==
while for the second (replica) brick:
# file: data/glusterfs/md1/brick1/kvm/hail/hail_home.qcow2
trusted.afr.md1-client-2=0sAAABJAAAAAAAAAAA
trusted.afr.md1-client-3=0sAAAAAAAAAAAAAAAA
trusted.gfid=0sOCFPGCdrQ9uyq2yTTPCKqQ==
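(For reference, I dumped these as root directly on the brick servers with something like the command below; getfattr only matches the user.* namespace by default, hence "-m .", and without "-e hex" binary values are printed base64-encoded, which is the "0s" prefix above.)

  getfattr -d -m . /data/glusterfs/md1/brick1/kvm/hail/hail_home.qcow2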
It seems that I have a split-brain. How can I solve this issue by resetting
the attributes, please?
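From what I have read, the usual manual fix seems to be: pick the brick whose copy you trust, clear the pending counter on the other brick so it no longer accuses the good one, then trigger a heal. A sketch of what I believe the procedure is, assuming I keep the copy on tsunami3 (md1-client-2) and discard the one on tsunami4:

  # on tsunami4, zero the 12-byte changelog that accuses client-2
  setfattr -n trusted.afr.md1-client-2 -v 0x000000000000000000000000 \
      /data/glusterfs/md1/brick1/kvm/hail/hail_home.qcow2
  # then force a lookup through a fuse client to kick off self-heal
  # (/mnt/md1 is only a placeholder for the actual mount point)
  stat /mnt/md1/kvm/hail/hail_home.qcow2

Or should I rather delete the bad copy together with its .glusterfs gfid hard link, as the log message suggests?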
Thanks,
Alessandro.
==================
gluster volume info md1
Volume Name: md1
Type: Distributed-Replicate
Volume ID: 6da4b915-1def-4df4-a41c-2f3300ebf16b
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: tsunami1:/data/glusterfs/md1/brick1
Brick2: tsunami2:/data/glusterfs/md1/brick1
Brick3: tsunami3:/data/glusterfs/md1/brick1
Brick4: tsunami4:/data/glusterfs/md1/brick1
Brick5: tsunami5:/data/glusterfs/md1/brick1
Brick6: tsunami6:/data/glusterfs/md1/brick1
Options Reconfigured:
server.allow-insecure: on
cluster.read-hash-mode: 2
features.quota: off
performance.write-behind: on
performance.write-behind-window-size: 4MB
performance.flush-behind: off
performance.io-thread-count: 64
performance.cache-size: 512MB
nfs.disable: on
cluster.lookup-unhashed: off
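By the way, if I read the brick/client numbering correctly (bricks pair up consecutively in a 3 x 2 distributed-replicate volume, and client indices are 0-based in brick order), the layout is:

  md1-replicate-0 = Brick1 (tsunami1) + Brick2 (tsunami2)  -> md1-client-0 / md1-client-1
  md1-replicate-1 = Brick3 (tsunami3) + Brick4 (tsunami4)  -> md1-client-2 / md1-client-3
  md1-replicate-2 = Brick5 (tsunami5) + Brick6 (tsunami6)  -> md1-client-4 / md1-client-5

so the split qcow2 file above sits on the tsunami3/tsunami4 pair.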
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.