I've always wondered what the scenario for these situations is (aside
from the doc description of nodes coming up and down).
Aren't Gluster writes atomic across all nodes? I seem to recall Jeff Darcy
stating that years ago.
So a clean shutdown for maintenance shouldn't be a problem at all. If a
node didn't get a write, it is the one likely to have failed.
So are we really only talking about a crash with data in flight?
I suppose a crash during the heal phase after a shutdown could trigger
this issue, especially if you are not using sharding and have huge VM files.
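
For what it's worth, a minimal sketch of what I mean (the volume name
"myvol" is just a placeholder; sharding only applies to files created
after it is enabled, so ideally turn it on before provisioning the VM
images):

    # enable sharding so large VM images are stored as smaller pieces
    gluster volume set myvol features.shard on
    gluster volume set myvol features.shard-block-size 64MB

    # after a node comes back, check what still needs healing
    gluster volume heal myvol info
    gluster volume heal myvol info split-brain

With sharding, a heal after maintenance only has to copy the shards
that changed, instead of re-syncing an entire multi-hundred-GB image.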
On 9/7/2017 11:06 AM, Pavel Szalbot wrote:
Hi Neil, docs mention two live nodes of replica 3 blaming each other
and refusing to do IO.
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/#1-replica-3-volume