Hi Brian,

- A barrier is similar to a throttling mechanism. All it does is queue up the callbacks at the server xlator. Once barriering is done, it simply starts unwinding them, so that clients can then get their responses. The idea is that if an application does not get an acknowledgement back for its fops, it will block for some time, effectively throttling itself.

- A snapshot here guarantees a snap of whatever has been committed to disk. So, in effect, every internal operation (afr/dht/...) should/will have to be able to heal itself once the volume restore takes place.

With regards,
Shishir

----- Original Message -----
From: "Brian Foster" <bfoster@xxxxxxxxxx>
To: "Shishir Gowda" <sgowda@xxxxxxxxxx>
Cc: gluster-devel@xxxxxxxxxx
Sent: Monday, August 5, 2013 6:11:47 PM
Subject: Re: Snapshot design for glusterfs volumes

On 08/02/2013 02:26 AM, Shishir Gowda wrote:
> Hi All,
>
> We propose to implement snapshot support for glusterfs volumes in release-3.6.
>
> Attaching the design document in the mail thread.
>
> Please feel free to comment/critique.
>

Hi Shishir,

Thanks for posting this. A couple of questions:

- The stage-1 prepare section suggests that operations are blocked (barrier) in the callback, but later on the doc indicates incoming operations would be held up. Does the barrier block winds and unwinds, or just winds? Could you elaborate on the logic there?

- This is partly called out in the open issues section with regard to write-behind, but don't we require some kind of operational coherency with regard to cluster translator operations? Is it expected that a snapshot across a cluster of bricks might not be coherent with regard to active afr transactions (and thus potentially require a heal in the snap), for example?

Brian

> With regards,
> Shishir
>
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel@xxxxxxxxxx
> https://lists.nongnu.org/mailman/listinfo/gluster-devel
>
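
[Editor's note: the barrier behaviour described above — hold server-side callbacks (unwinds) while the barrier is up, then flush them so clients finally get their replies — can be sketched as follows. This is an illustrative Python sketch only, not the actual C server-xlator code; the class and method names (`Barrier`, `enable`, `disable`, `unwind`) are hypothetical.]

```python
import threading
from collections import deque

class Barrier:
    """Sketch of a response barrier: while enabled, callbacks (unwinds)
    are queued at the server instead of being sent to clients; disabling
    the barrier flushes the queue in FIFO order."""

    def __init__(self):
        self._lock = threading.Lock()
        self._enabled = False
        self._queue = deque()

    def enable(self):
        with self._lock:
            self._enabled = True

    def disable(self):
        # Flush all queued unwinds so clients finally get their replies.
        with self._lock:
            self._enabled = False
            pending, self._queue = self._queue, deque()
        for send_reply in pending:
            send_reply()

    def unwind(self, send_reply):
        # Called on the callback (unwind) path. If the barrier is up,
        # hold the reply; the client blocks waiting for its ack, which
        # effectively throttles further fops from that application.
        with self._lock:
            if self._enabled:
                self._queue.append(send_reply)
                return
        send_reply()
```

While the barrier is enabled, `unwind()` parks each reply; `disable()` releases them all, which matches the "queue up the callbacks, then start unwinding" description above.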