Michael Cassaniti wrote:
On 03/25/10 10:21, Gordan Bobic wrote:
Christopher Hawkins wrote:
Correct me if I'm wrong, but something I would add to this debate is
the type of split brain we are talking about. Glusterfs is quite
different from GFS or OCFS2 in a key way, in that it is an overlay FS
that uses locking to control who writes to the underlying files and
how they do it.
It is not a cluster FS the way GFS is a cluster FS. For example, if
GFS has a split brain, then fencing is the only thing preventing the
complete destruction of all data, as both nodes (assuming only two)
write to the same disk at the same time and utterly destroy the
filesystem. But glusterfs is passing writes down to ext3 or whatever,
so at worst you get out-of-date files or lost updates, not a useless
partition that used to have your data...
I think less stringent controls are appropriate in this case, and
that GFS / OCFS2 are entirely different animals when it comes to how
severe a split brain can be. They MUST be strict about fencing, but
with Glusterfs you have a choice about how strict you need it to be.
Not really. The only reason it is less bad is that the corruption
affects individual files rather than the complete file system.
Granted, that is much better than hosing the entire file system, but
the fact remains that you are left with files that cannot be healed
without manual intervention or without explicitly specifying which node
should win via the favorite-child option.
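For reference, in a cluster/replicate volfile that option looks
roughly like this (the subvolume names here are illustrative):

    volume replicate
      type cluster/replicate
      # node1 wins any conflict that cannot be resolved
      # automatically; subvolume names are illustrative.
      option favorite-child node1
      subvolumes node1 node2
    end-volume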
Gordan,
Can you suggest how you would get the first node in your scenario
back in sync?
The point is that getting things in sync after a split-brain isn't
possible without throwing away at least some changes. The only way to
deal with it is to not allow it to desync in the first place.
If I have your scenario right, including what you believe should happen:
* First node goes down. Simple enough.
* Second node has new file operations performed on it that the first
node does not get.
* First node comes up. It is completely fenced off from all other
machines while it gets itself in sync with the second node.
* Second node goes down. Is it before or after the first node is synced?
  o If it is before, then you have a fully isolated FS that is
    not accessible.
  o If it is after, then you don't have a problem.
I would suggest writing a script that applies firewall rules to
perform the fencing.
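A minimal sketch of what I mean, assuming iptables; the peer address
and port are illustrative and would need to match the node being
fenced and the transport port your volfiles specify:

    #!/bin/sh
    # Minimal firewall-fencing sketch; PEER and PORT are placeholders.
    PEER=192.168.0.2
    PORT=6996

    # Cut the failed peer off from the glusterfs transport port.
    iptables -I INPUT -s "$PEER" -p tcp --dport "$PORT" -j DROP

    # ... bring the local copy back in sync here ...

    # Re-admit the peer once it is known to be in sync.
    iptables -D INPUT -s "$PEER" -p tcp --dport "$PORT" -j DROP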
This is not really good enough - you need an out-of-band fencing device
that you can use to forcibly down the node that disconnected, e.g.
remote power-off via power management (a UPS or a network-controllable
power bar) or remote server management (Dell DRAC, Raritan eRIC G4, HP
iLO, Sun LOM, etc.). When the node gets rebooted, it has to notice that
there are other nodes already up and specifically put itself into a
mode where it will lose any contest over being the source node for
resync until it has fully checked all the files' metadata against its
peers.
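For example, if the node has an IPMI-capable management processor, the
fencing action boils down to a single out-of-band command (the BMC
address and credentials here are placeholders):

    # Forcibly power off the disconnected node via its BMC.
    ipmitool -I lanplus -H 192.168.100.2 -U admin -P secret chassis power off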
I believe you can run ls -R on the filesystem to get it in sync. You
would need to mount glfs locally on the first node, let it sync, then
open the firewall ports afterwards. Is that an appropriate solution?
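Concretely, I am thinking of something like this (the volfile path and
mount point are illustrative):

    # Mount the volume locally using the client volfile.
    glusterfs -f /etc/glusterfs/client.vol /mnt/glusterfs

    # Walking the tree stat()s every entry, which is what
    # triggers self-heal on any out-of-date copies.
    ls -lR /mnt/glusterfs > /dev/null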
The problem is that the firewalling would have to be applied by every
node other than the node that dropped off, this would need to be
communicated to all the other nodes, and they would all have to confirm
before the fencing action is deemed to have succeeded. That is a lot
more complex and error-prone than just using a single point of fencing
for each node, such as a network-controlled power bar.
(e.g. http://www.linuxfordevices.com/c/a/News/Entrylevel-4port-IP-power-switch-runs-Linux/)
Gordan