Re: Adding nodes

It looks like you are using the same brick in multiple AFR definitions, which won't work. See http://www.gluster.org/docs/index.php/Mixing_Unify_and_Automatic_File_Replication for an example of what you appear to be trying to do.
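
With four nodes, the usual layout is to pair the bricks so that no brick appears in more than one AFR. A minimal sketch of just the AFR and unify sections, assuming the same brick1 through brick4 client volumes already defined in your spec file:

volume afr1
 type cluster/afr
 subvolumes brick1 brick2    # brick1 and brick2 mirror each other
end-volume

volume afr2
 type cluster/afr
 subvolumes brick3 brick4    # brick3 and brick4 mirror each other
end-volume

volume unify
 type cluster/unify
 option namespace afr-ns
 option scheduler rr
 subvolumes afr1 afr2        # each brick belongs to exactly one AFR
end-volume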

To answer your original question, files within an AFR are healed from one node to the other when the file is accessed (actually read) through the AFR and one node is found to have more recent data than the others.

You can make sure a particular file is healed by running head -c1 on it (sending the output to /dev/null if you like), and you can make sure a whole AFR is synced by running find on it and executing head -c1 on every file found. See http://www.gluster.org/docs/index.php/Understanding_AFR_Translator for a more complete example and much more information.
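
For example, a quick sketch, assuming the unified volume is mounted at /mnt/glusterfs (substitute your own client mount point):

# read the first byte of every file through the mount to trigger self-heal
find /mnt/glusterfs -type f -exec head -c1 '{}' \; > /dev/null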

Marcus Herou wrote:
Hi.

I have a question regarding expanding a GlusterFS system which has
probably been answered before, but here it is anyway.

Let's say I use Unify over AFR and have 3 nodes, where each file is
replicated twice.

First, a client spec file like the one on the web:

volume brick1
 type protocol/client
 option transport-type tcp/client     # for TCP/IP transport
 option remote-host 192.168.1.1      # IP address of the remote brick
 option remote-subvolume brick        # name of the remote volume
end-volume

volume brick2
 type protocol/client
 option transport-type tcp/client
 option remote-host 192.168.1.2
 option remote-subvolume brick
end-volume

volume brick3
 type protocol/client
 option transport-type tcp/client
 option remote-host 192.168.1.3
 option remote-subvolume brick
end-volume

volume brick-ns1
 type protocol/client
 option transport-type tcp/client
 option remote-host 192.168.1.1
 option remote-subvolume brick-ns
end-volume

volume brick-ns2
 type protocol/client
 option transport-type tcp/client
 option remote-host 192.168.1.2
 option remote-subvolume brick-ns
end-volume

volume afr1
 type cluster/afr
 subvolumes brick1 brick2
end-volume

volume afr2
 type cluster/afr
 subvolumes brick2 brick3
end-volume

volume afr-ns
 type cluster/afr
 subvolumes brick-ns1 brick-ns2
end-volume

volume unify
 type cluster/unify
 option namespace afr-ns
 option scheduler rr
 subvolumes afr1 afr2
end-volume

And after adding another node, I add the following:

....
volume brick4
 type protocol/client
 option transport-type tcp/client
 option remote-host 192.168.1.4
 option remote-subvolume brick
end-volume
....
volume afr3
 type cluster/afr
 subvolumes brick3 brick4
end-volume
....
volume unify
 type cluster/unify
 option namespace afr-ns
 option scheduler rr
 subvolumes afr1 afr2 afr3
end-volume


So the question is really: will the new node4 get the data from node3
automatically?


I appreciate any answers.

Kindly

//Marcus





--
Marcus Herou CTO and co-founder Tailsweep AB
+46702561312
marcus.herou@xxxxxxxxxxxxx
http://www.tailsweep.com/
http://blogg.tailsweep.com/


_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
http://lists.nongnu.org/mailman/listinfo/gluster-devel



--

-Kevan Benson
-A-1 Networks



