On 03/13/2014 04:50 PM, Brock Nanson wrote:
Yeah... I found the Joe Julian "do's and don'ts" blog post too late; it pretty much says I shouldn't have started down this road. But I have started down the road, so I'd like to make the best of it. (http://joejulian.name/blog/glusterfs-replication-dos-and-donts/)
...
2) I've seen it suggested that the write function isn't considered complete until it's complete on all bricks in the volume. My write speeds would seem to confirm this.
Yes, the write returns only when all replicas have been written. That's synchronous replication, and usually "replication" means exactly that.
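To make that concrete, here's a minimal sketch of the kind of volume we're talking about (host and brick names are made up). With a replica 2 volume spanning the two sites, a client write has to reach both bricks before it returns, so write speed is capped by the slow link:

  # two-way replicated volume, one brick per site (names are examples)
  gluster volume create gv0 replica 2 siteA:/export/brick1 siteB:/export/brick1
  gluster volume start gv0

  # clients mount the volume; every write now goes to both bricks synchronously
  mount -t glusterfs siteA:/gv0 /mnt/gv0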
Is this correct, and is there any way to cache the data and allow it to trickle over the link in the background?
You're talking about asynchronous replication, which GlusterFS calls "geo-replication".
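Note that geo-replication is configured per master volume pointing at a single slave, which is why it only goes one way. A rough sketch, assuming the 3.4-era syntax and made-up host/volume names (you'd set up passwordless ssh to the slave first):

  # the master volume gv0 pushes changes asynchronously to the slave volume
  gluster volume geo-replication gv0 siteB::gv0-slave start

  # check the session
  gluster volume geo-replication gv0 siteB::gv0-slave status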
I'm thinking about the write-behind-window-size setting, etc. It would be nice if something like DRBD Protocol A could be implemented, where writes are considered complete when the fast local one is done. I realize the potential for data loss if something goes wrong, but in my case the heal would take care of almost every scenario I can envision. Geo-replication would seem to be the ideal solution, except for the fact that it apparently only works in one direction (although I understand it was hoped that 3.4.0 would add support for both directions).
So if you allow replication to be delayed, and you allow writes on both sides, how would you deal with the same file being written simultaneously on both sides? Which write would win in the end?
So are there any configuration tricks (write-behind, compression, etc.) that might help me out? Is there a way to fool geo-replication into working in both directions, given that my application isn't seeing serious read/write activity and some reasonable amount of risk is acceptable?
You're basically talking about running rsyncs in both directions. How will you handle any file conflicts?
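For what it's worth, the knobs you mention do exist and are set per volume. A sketch (values are just examples, and the compression translator is new in 3.4, so verify before relying on it); these can help latency and link utilization, but they don't change the synchronous write semantics:

  # let the client buffer more dirty data before flushing it to the bricks
  gluster volume set gv0 performance.write-behind-window-size 4MB

  # compress traffic between clients and bricks (CDC translator, new in 3.4)
  gluster volume set gv0 network.compression on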
--
Alex Chekholko chekh@xxxxxxxxxxxx

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users