Gluster, failed heals, split-brains, and an odd number of replicas

I'm playing with a setup that uses replication across two peers.  This
means that if I'm writing or deleting files and I shut off one of the
peers, I end up with failed heals and split-brains.  But how would
Gluster react if I replicated across three peers?  Would it take the
best two-out-of-three and repair the one copy that shows a discrepancy?
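
In case it helps, here is roughly what I'm testing, plus the three-peer
variant I'm asking about.  The volume name (gv0), peer names, and brick
paths are just placeholders, and I'm assuming cluster.quorum-type is the
right client-quorum knob here:

    # current two-peer setup (placeholder names)
    gluster volume create gv0 replica 2 peer1:/bricks/gv0 peer2:/bricks/gv0
    gluster volume start gv0

    # the three-peer variant I'm asking about; my understanding is that
    # with client quorum set to "auto", writes need a majority (2 of 3),
    # which should let the odd copy out be healed from the other two
    gluster volume create gv0 replica 3 peer1:/bricks/gv0 peer2:/bricks/gv0 peer3:/bricks/gv0
    gluster volume set gv0 cluster.quorum-type auto
    gluster volume start gv0

    # what I run after shutting a peer off
    gluster volume heal gv0 info
    gluster volume heal gv0 info split-brain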

Michael
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users



