On Wed, 23 Apr 2008, Krishna Srinivas wrote:
> That setup is wrong. Some syscalls would hang and some would go into an
> infinite loop. I'm just guessing, but things will go wrong.
> The correct setup would have each server exporting two volumes:
> 1) an AFR volume, to be used by clients (not by the other server)
So, the setup below would be OK for the client-mounted volume? Or would
both volumes need to be of protocol/client type, with one mounted via
loopback?
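
To make sure I understand: something like this on the client, presumably?
(A sketch only; I'm assuming the server exports its AFR volume as "foo",
as in my config below.)

volume mnt
  type protocol/client
  option transport-type tcp/client
  # whichever server this client mounts from
  option remote-host 192.168.0.1
  option remote-subvolume foo
end-volume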
> 2) a storage/posix volume, to be used by the AFR volume on the other server.
So foo1 would need to be in a separate volume definition file and exported
on its own?
How would the changes propagate in this case? I'm guessing that the
client-mounted AFR volume would have to consist of two protocol/client
volumes, one local and one remote. But would this not lead to the same
looping condition?
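
For concreteness, here is my reading of the layout being suggested, reusing
the names from my original config below (a sketch only, so the names and
details may well be wrong):

volume foo1
  type storage/posix
  option directory /gluster
end-volume

volume foo2
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.1
  # the other server's posix brick, not its AFR volume
  option remote-subvolume foo1
end-volume

volume foo
  type cluster/afr
  subvolumes foo1 foo2
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  # export foo1 for the other server's AFR, and foo for clients
  subvolumes foo1 foo
  option auth.ip.foo1.allow 192.168.*
  option auth.ip.foo.allow 127.0.0.1,192.168.*
end-volume

If that is right, the loop goes away because foo2 points at the other
server's foo1 (plain posix storage) rather than at its foo (AFR), so
neither AFR ever feeds the other.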
Gordan
On Wed, Apr 23, 2008 at 8:14 PM, <gordan@xxxxxxxxxx> wrote:
I'm trying to do server-side AFR, and the sort of thing I'm coming up with
is a bit like the following:
server.vol
->snip
volume foo1
  type storage/posix
  option directory /gluster
end-volume

volume foo2
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.1
  option remote-subvolume foo
end-volume

volume foo
  type cluster/afr
  subvolumes foo1 foo2
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  subvolumes foo
  option auth.ip.foo.allow 127.0.0.1,192.168.*
end-volume
<-snap
The only difference between the two servers is the IP address in the foo2
(protocol/client) block (192.168.0.2 instead of .1).
The question I have is: would this cause a circular replication meltdown,
or are loops somehow detected/prevented/avoided? Effectively, the client
would connect to one server only and upload the data, which would be
replicated to the other server; since that server also replicates back, the
file would be replicated back again, triggering the local server to
replicate once more, and so on.
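
To spell out the cycle I'm worried about (the write path, as far as I can
see, with A and B as the two servers):

  client write -> A:foo (afr)
    -> A:foo1 (posix, stored locally)
    -> A:foo2 (client) -> B:foo (afr)
      -> B:foo1 (posix, stored locally)
      -> B:foo2 (client) -> A:foo (afr) -> ... and round it goes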
What prevents this sort of thing from occurring, and is there a better way
to achieve this kind of setup?
Gordan