Re: Gluster does not seem to detect a split-brain situation

Hi,

First, I do not think that you have a split-brain. I do have one, and my `gluster volume heal info` output looks like this:
gluster> volume heal vol3 info
Brick node5.virt.local:/storage/brick12/
/5d0bb2f3-f903-4349-b6a5-25b549affe5f/dom_md/ids - Is in split-brain

Number of entries: 1

Brick node6.virt.local:/storage/brick13/
/5d0bb2f3-f903-4349-b6a5-25b549affe5f/dom_md/ids - Is in split-brain

Number of entries: 1

Second, you should set up quorum; you can find some info about it here. My config is (two servers with replica 2):
cluster.server-quorum-type: server
cluster.quorum-type: fixed
cluster.quorum-count: 1
cluster.server-quorum-ratio: 51%
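
For reference, these options can be applied with `gluster volume set`; the server-quorum ratio is cluster-wide, so it is set on "all" rather than on a single volume (vol3 is just my volume name, substitute your own):

gluster> volume set vol3 cluster.server-quorum-type server
gluster> volume set vol3 cluster.quorum-type fixed
gluster> volume set vol3 cluster.quorum-count 1
gluster> volume set all cluster.server-quorum-ratio 51%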


And lastly, I am new to Gluster, so I may be wrong.


On 07.06.2015 20:13, Sjors Gielen wrote:
Hi all,

I work at a small, 8-person company that uses Gluster for its primary data storage. We have a volume called "data" that is replicated over two servers (details below). This worked perfectly for over a year, but lately we've been noticing some mismatches between the two bricks, so it seems there has been some split-brain situation that is not being detected or resolved. I have two questions about this:

1) I expected Gluster to (eventually) detect a situation like this; why doesn't it?
2) How do I fix this situation? I've tried an explicit 'heal', but that didn't seem to change anything.

Thanks a lot for your help!
Sjors

------8<------

Volume & peer info: http://pastebin.com/PN7tRXdU
curacao# md5sum /export/sdb1/data/Case/21000355/studies.dat
7bc2daec6be953ffae920d81fe6fa25c  /export/sdb1/data/Case/21000355/studies.dat
bonaire# md5sum /export/sdb1/data/Case/21000355/studies.dat
28c950a1e2a5f33c53a725bf8cd72681  /export/sdb1/data/Case/21000355/studies.dat

# mallorca is one of the clients
mallorca# md5sum /data/Case/21000355/studies.dat
7bc2daec6be953ffae920d81fe6fa25c  /data/Case/21000355/studies.dat

I expected an input/output error when reading this file, because of the split-brain situation, but got none. There are no entries in the GlusterFS logs on either bonaire or curacao.
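
A check that may help here: as far as I understand AFR, split-brain is decided from the trusted.afr.* changelog extended attributes on the bricks, not from the file contents. Dumping the xattrs of the file on both bricks and comparing them should show what Gluster thinks:

curacao# getfattr -d -m . -e hex /export/sdb1/data/Case/21000355/studies.dat
bonaire# getfattr -d -m . -e hex /export/sdb1/data/Case/21000355/studies.dat

If the trusted.afr.data-client-0 and trusted.afr.data-client-1 values are all zeroes on both bricks, Gluster considers the replicas in sync even though their contents differ. That typically happens when a file is modified directly on a brick, bypassing the client mount, so no changelog is recorded and neither split-brain detection nor self-heal is triggered.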

bonaire# gluster volume heal data full
Launching heal operation to perform full self heal on volume data has been successful
Use heal info commands to check status
bonaire# gluster volume heal data info
Brick bonaire:/export/sdb1/data/
Number of entries: 0

Brick curacao:/export/sdb1/data/
Number of entries: 0

(Same output on curacao, and hours after this, the md5sums on both bricks still differ.)
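If the changelog xattrs really are clean on both bricks, a full heal will not see anything to fix. One approach I have seen suggested for this situation (treat it as a sketch and try it on an unimportant file first): decide which copy is good, remove the bad copy from the other brick together with its GFID hard link under .glusterfs, and let self-heal recreate it. Assuming, purely for illustration, that bonaire holds the bad copy:

bonaire# getfattr -n trusted.gfid -e hex /export/sdb1/data/Case/21000355/studies.dat
bonaire# rm /export/sdb1/data/Case/21000355/studies.dat
bonaire# rm /export/sdb1/data/.glusterfs/<gg>/<hh>/<full-gfid>
bonaire# gluster volume heal data

Here <gg> and <hh> stand for the first and second pairs of hex digits of the GFID printed by getfattr, and <full-gfid> is the GFID in its usual dashed form. Reading the file through a client mount (e.g. on mallorca) afterwards should trigger the heal; then compare the md5sums again.
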

curacao# gluster --version
glusterfs 3.6.2 built on Mar  2 2015 14:05:34
Repository revision: git://git.gluster.com/glusterfs.git
(Same version on Bonaire)


_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
