> > > * How do I cleanly shut down the bricks making sure that they
> > > remain consistent?
> >
> > For 1.3 you have to kill the glusterfsd manually. You can get the pid
> > from the pidfile (${datadir}/run/glusterfsd.pid).
>
> That's not a problem, my question is how do I shut down two mirrored
> bricks whilst maintaining the consistency of the mirrors?

Currently there is no way. This will definitely come in 1.4 (or maybe in
1.3.x itself).

> > > * Could race conditions ever lead to the different bricks having
> > > different data if two clients tried to write to the same mirrored
> > > file? Is this the reason for using the posix-locks translator over
> > > and above the posix locks on the underlying bricks?
> >
> > You are right: two clients writing to the same region of a file are
> > expected to use posix locks to lock out their region before editing
> > in an AFR scenario.
>
> Mirroring still raises a 'layer' issue: for an unmirrored, functioning
> disk the filesystem always knows what the bits on the disk are,
> although locking issues may mean that the data are invalid at the
> application level. A mirrored filesystem raises the additional issue
> that the two mirrors may disagree about what the bits are. So, if
> applications fail to use locking, is there the danger that the two
> mirrors may end up with different bits on their disks? (This is a
> similar question to the one above.)

Theoretically, if two applications are not using locks over AFR and are
writing to the same region, then the two mirrors can end up with
different bits. This possibility is quite remote, though, since all
clients write to the servers in the same order. The race condition is
very narrow: client1 writes to server1 and is about to write to server2
when client2 arrives (after client1's write to server1) and writes to
both server1 and server2 before client1 writes to server2. But yes, the
race condition is potentially there, and Sir Lord Murphy lurks :)

We are working on this corner case for 1.4.

avati

--
ultimate_answer_t
deep_thought (void)
{
  sleep (years2secs (7500000));
  return 42;
}
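
To make the manual shutdown mentioned above concrete, here is a minimal
C sketch that reads the pid from the glusterfsd pidfile and sends it
SIGTERM. The expanded ${datadir} path below is only an assumption for
the example; substitute whatever prefix your build was configured with.

/* Sketch: stop glusterfsd by reading its pidfile and sending SIGTERM. */
#include <sys/types.h>
#include <signal.h>
#include <stdio.h>

int
main (void)
{
  /* assumed expansion of ${datadir}/run/glusterfsd.pid */
  const char *pidfile = "/usr/local/var/run/glusterfsd.pid";
  FILE *fp = fopen (pidfile, "r");
  long  pid = 0;

  if (fp == NULL || fscanf (fp, "%ld", &pid) != 1 || pid <= 0) {
    fprintf (stderr, "could not read a pid from %s\n", pidfile);
    return 1;
  }
  fclose (fp);

  if (kill ((pid_t) pid, SIGTERM) != 0) {
    perror ("kill");
    return 1;
  }
  return 0;
}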
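
And to illustrate the locking advice, here is a minimal sketch of a
client taking a POSIX record lock over the region it is about to write,
which is what closes the client1/client2 window described above. The
mount point, offset and payload are invented for the example.

/* Sketch: lock a byte range with a POSIX record lock before writing,
   so that two clients editing the same region of an AFR-replicated
   file serialize their writes. */
#include <sys/types.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main (void)
{
  const char *path  = "/mnt/glusterfs/shared.dat";  /* assumed mount point */
  const char  buf[] = "client payload";
  struct flock lk;
  int fd;

  fd = open (path, O_RDWR);
  if (fd < 0) {
    perror ("open");
    return 1;
  }

  memset (&lk, 0, sizeof (lk));
  lk.l_type   = F_WRLCK;       /* exclusive write lock           */
  lk.l_whence = SEEK_SET;
  lk.l_start  = 0;             /* lock exactly the region we edit */
  lk.l_len    = sizeof (buf);

  if (fcntl (fd, F_SETLKW, &lk) < 0) {   /* block until the lock is granted */
    perror ("fcntl (F_SETLKW)");
    close (fd);
    return 1;
  }

  if (pwrite (fd, buf, sizeof (buf), 0) < 0)
    perror ("pwrite");

  lk.l_type = F_UNLCK;                   /* release the region */
  fcntl (fd, F_SETLK, &lk);

  close (fd);
  return 0;
}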