RE: AFR Replication

See:   http://www.gluster.org/docs/index.php/Understanding_AFR_Translator

At the bottom of the page are examples showing how to initiate the sync. To
clarify that point, and some of your other questions from the split-brain
thread:

Automatic re-sync after one server has been down is not available yet, but it
is coming in the next release with an HA translator. For now you can do a
total re-sync manually by the method listed above, or let the cluster re-sync
itself over time, since accessing a file for a read or write causes that file
to be synced.
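
For example, something along these lines should walk the whole tree and
trigger a sync of every file (just a sketch - the wiki may suggest a slightly
different command, and /mnt/glusterfs is only a placeholder mount point):

    find /mnt/glusterfs -type f -exec head -c1 {} \; > /dev/null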

You don't have to AFR from the client side, but you can. You can also do it
on the server side, or even both. Part of the beauty of glusterfs is its
simple building blocks - you can set it up any number of ways. Personally I
don't think an n-fold increase in client bandwidth for mirroring is all that
bad. How many "mirrors" do you really need??  :-)   Unify is the tool for
larger aggregations, AFR is the tool for a mirror, and unifying AFRs is the
way to get both HA and aggregation - it also shares the load between the
servers and clients when set up that way. In a nutshell:

Each server AFRs to the other servers, unifies the AFR'd volumes, and then
exports the unified volume. The clients mount that export from any given
server using round-robin DNS or something similar (a trick that will probably
be deprecated once the HA translator is available). That way a client only
needs 1x bandwidth, while each server needs bandwidth proportional to the
number of AFR copies. So if you only need to keep 2x copies of your data, you
never need more than 2x the bandwidth - and no matter what cluster filesystem
you use, I can't think of a way to get 2x the files without 2x the writes.
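
Roughly, one server's spec file for that layout might look like the sketch
below. I'm writing this from memory against the current 1.3-era syntax, and
every host name, directory, and volume name is a placeholder - adjust to
taste:

    # Local storage for this server's share of the data
    volume local-brick
      type storage/posix
      option directory /data/export
    end-volume

    # The partner server's brick, reached over the network
    volume remote-brick
      type protocol/client
      option transport-type tcp/client
      option remote-host server2.example.com
      option remote-subvolume local-brick
    end-volume

    # Mirror the local brick with the partner's brick
    volume afr1
      type cluster/afr
      subvolumes local-brick remote-brick
    end-volume

    # The second mirrored pair lives on two other servers
    volume remote-brick2
      type protocol/client
      option transport-type tcp/client
      option remote-host server3.example.com
      option remote-subvolume local-brick
    end-volume

    volume remote-brick3
      type protocol/client
      option transport-type tcp/client
      option remote-host server4.example.com
      option remote-subvolume local-brick
    end-volume

    volume afr2
      type cluster/afr
      subvolumes remote-brick2 remote-brick3
    end-volume

    # Unify needs a small namespace volume of its own
    volume ns
      type storage/posix
      option directory /data/export-ns
    end-volume

    # Aggregate the mirrored pairs
    volume unify0
      type cluster/unify
      option scheduler rr
      option namespace ns
      subvolumes afr1 afr2
    end-volume

    # Export the local brick (for the other servers' AFR)
    # and the unified volume (for the clients)
    volume server
      type protocol/server
      option transport-type tcp/server
      option auth.ip.local-brick.allow 192.168.*
      option auth.ip.unify0.allow 192.168.*,127.0.0.1
      subvolumes local-brick unify0
    end-volume

A client then just mounts unify0 from whichever server the round-robin DNS
hands it.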

The big issue right now seems to be Unify's inability to gracefully handle
the failure of one of its subvolume servers. So in the current state of
things, AFR works for mirrors, Unify works for aggregation, and putting them
together gets you both - but the whole stack becomes inoperable if a Unify
subvolume fails.

Re: the metadata, there is none in the traditional sense. The developers use
extended attributes on the file itself to implement versioning and determine
which file gets overwritten during a sync. 
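
If you're curious, you can dump those attributes from the backend copy of a
file with getfattr (the key names vary between releases, so this just lists
everything; /data/export is a placeholder backend path):

    getfattr -d -m . -e hex /data/export/somefile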

There is no fencing and no quorum - the cluster is essentially stateless,
which is really great, because if you build it right you can't really get
into a situation where split brain is possible (ok, VERY, very unlikely).
All clients connect on the same port, so if you AFR on the client side, say,
it's tough to imagine how one client could write to a server while another
client believed that same server was down, yet still had access to another
server on the same network and could write to it. Of course, if you don't
think through these issues at build time, it is possible to set yourself up
for disaster in certain situations - but that's the case with anything
cluster-related... All in all I think it's a tremendous filesystem tool.
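
For completeness, client-side AFR is just the mirror image of the server-side
setup - the client connects to both servers and does the mirroring itself. A
minimal client spec might look something like this (hosts and volume names
are again placeholders):

    # Connections to the two servers' exported bricks
    volume remote1
      type protocol/client
      option transport-type tcp/client
      option remote-host server1.example.com
      option remote-subvolume brick
    end-volume

    volume remote2
      type protocol/client
      option transport-type tcp/client
      option remote-host server2.example.com
      option remote-subvolume brick
    end-volume

    # The client mirrors every write to both servers
    volume mirror
      type cluster/afr
      subvolumes remote1 remote2
    end-volume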

Anyone - feel free to correct any errors I made here. 

Chris


-----Original Message-----
From: gluster-devel-bounces+chawkins=veracitynetworks.com@xxxxxxxxxx
[mailto:gluster-devel-bounces+chawkins=veracitynetworks.com@xxxxxxxxxx] On
Behalf Of gordan@xxxxxxxxxx
Sent: Friday, April 18, 2008 9:53 AM
To: Gluster Devel
Subject: Re: AFR Replication

On Fri, 18 Apr 2008, Krishna Srinivas wrote:

> Specify the following option in server spec file:
> option auth.ip.foo.allow 192.168.*,127.0.0.1

Thanks, that fixed half of the problem. However, I'm still not seeing files
that were created before the second node started.

Here is what I'm trying to do:
Start node1 server,client
On node1, create files test1 and test2.

Start node2 server,client
On node2, create file test3.

test3 exists on both node1 and node2. test1 and test2 exist only on node1,
and I cannot see them through the mount point on node2. Is this the expected
behaviour? Isn't the metadata supposed to be updated so that all files are
visible from all nodes?

I am only using the AFR translator for mirroring.

Gordan


_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
http://lists.nongnu.org/mailman/listinfo/gluster-devel