On 07/03/2010 16:02, Chad wrote:
> Is there a gluster developer out there working on this problem
> specifically?
>
> Could we add some kind of "sync done" command that has to be run
> manually, and until it is run the failed node is not used?
>
> The bottom line for me is that I would much rather run on a
> performance-degraded array until a sysadmin intervenes than lose any
> data.

I'm only in evaluation mode at the moment, but resolving split brain is something which is terrifying me, and I have been giving some thought to how it needs to be done with the various solutions.

In the case of gluster it really does seem very important to have a reliable way to know when the system is fully synced again after an outage. For example, a not unrealistic situation if you were doing a bunch of upgrades would be:

- Turn off server 1 (S1) and upgrade it; server 2 (S2) now deviates from S1
- Turn S1 back on and expect it to sync all the changes made while it was down - the key expectation here is that S1 only receives changes from S2 and never sends changes
- Some event marks the sync complete, so that we can then turn off S2 and upgrade it

The problem if you skip the sync step is that when you turn off S2, S1 doesn't know about the changes made while it was off and serves up incomplete information. Split brain can occur where a file is changed on both servers while they couldn't talk to each other, and then changes must be lost...

I suppose a really cool translator could be written to track changes made to an AFR group while one member is missing; the out-of-sync file list would then be resupplied to that member once it was turned on again, in order to speed up replication... Kind of a lot of work for a small improvement, but it could be interesting to create...

Perhaps some dev has other suggestions on a "procedure" to follow to avoid split brain in the situation where we need to turn off all the servers, one by one, in an AFR group?

Thanks

Ed W
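P.S. To make the translator idea above a bit more concrete, here is a minimal sketch (in Python rather than as a real gluster translator, and with all class and method names hypothetical) of tracking the out-of-sync file list while one AFR member is down and resupplying it on reconnect:

```python
class DirtyTracker:
    """Records files modified on the surviving replica while a peer is
    offline, so the returning peer can be healed from a short list of
    paths instead of a full filesystem crawl. Purely illustrative."""

    def __init__(self):
        self.offline_peers = set()
        self.dirty = {}  # peer -> set of paths changed while it was down

    def peer_down(self, peer):
        # Peer went offline: start accumulating changes on its behalf.
        self.offline_peers.add(peer)
        self.dirty.setdefault(peer, set())

    def record_write(self, path):
        # Called on every successful write on the live replica.
        for peer in self.offline_peers:
            self.dirty[peer].add(path)

    def peer_up(self, peer):
        # Peer is back: hand over the files it needs to resync. The
        # "sync done" event would fire once this list has been drained.
        self.offline_peers.discard(peer)
        return sorted(self.dirty.pop(peer, set()))


tracker = DirtyTracker()
tracker.peer_down("S1")           # S1 taken down for upgrade
tracker.record_write("/data/a")   # writes keep landing on S2 meanwhile
tracker.record_write("/data/b")
todo = tracker.peer_up("S1")      # the "resupply" step on reconnect
print(todo)                       # -> ['/data/a', '/data/b']
```

Once `peer_up` returns an empty list (nothing left to drain), that would be the reliable "fully synced" signal after which S2 could safely be taken down in turn.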