I am trying to design a highly available cluster setup for my benchmarking. Today I read some design information available on the GlusterFS home page:
http://www.gluster.org/docs/index.php/Simple_High_Availability_Storage_with_GlusterFS_2.0#Larger_storage_using_Unify_.2B_AFR
It is configured using 6 servers and a single client. Server 1 and server 2 each have two directories, /export and /export-ns.
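If I understood the page correctly, server 1 and server 2 export both directories, while servers 3 to 6 export only /export. Going from memory, I think the server-side volume file for server 1 and server 2 looks roughly like the sketch below (the volume names brick and brick-ns are what the client configuration refers to); I may be remembering the page wrong, so please correct me.

volume brick
type storage/posix
option directory /export
end-volume
volume brick-ns
type storage/posix
option directory /export-ns # second directory, used only for the namespace
end-volume
volume server
type protocol/server
option transport-type tcp
option auth.addr.brick.allow * # allowing any client here is just my guess
option auth.addr.brick-ns.allow *
subvolumes brick brick-ns
end-volume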
What does unify actually do here?
volume brick1
type protocol/client
option transport-type tcp
option remote-host 192.168.1.1 # IP address of the remote brick
option remote-subvolume brick # name of the remote volume
end-volume
From this I understand that it will mount the directory exported by server 1 (192.168.1.1) at the client machine's mount point.
volume brick2
type protocol/client
option transport-type tcp
option remote-host 192.168.1.2
option remote-subvolume brick
end-volume
It will mount the directory exported by server 2 (192.168.1.2) at the client machine's mount point.
volume brick3
type protocol/client
option transport-type tcp
option remote-host 192.168.1.3
option remote-subvolume brick
end-volume
It will mount the directory exported by server 3 (192.168.1.3) at the client machine's mount point.
volume brick4
type protocol/client
option transport-type tcp
option remote-host 192.168.1.4
option remote-subvolume brick
end-volume
It will mount the directory exported by server 4 (192.168.1.4) at the client machine's mount point.
volume brick5
type protocol/client
option transport-type tcp
option remote-host 192.168.1.5
option remote-subvolume brick
end-volume
It will mount the directory exported by server 5 (192.168.1.5) at the client machine's mount point.
volume brick6
type protocol/client
option transport-type tcp
option remote-host 192.168.1.6
option remote-subvolume brick
end-volume
It will mount the directory exported by server 6 (192.168.1.6) at the client machine's mount point.
volume brick-ns1
type protocol/client
option transport-type tcp
option remote-host 192.168.1.1
option remote-subvolume brick-ns # Note the different remote volume name.
end-volume
It will mount the directory exported by server 1 (192.168.1.1), in this case /home/export-ns/, at the client machine's mount point.
volume brick-ns2
type protocol/client
option transport-type tcp
option remote-host 192.168.1.2
option remote-subvolume brick-ns # Note the different remote volume name.
end-volume
It will mount the directory exported by server 2 (192.168.1.2), in this case /home/export-ns/, at the client machine's mount point.
volume afr1
type cluster/afr
subvolumes brick1 brick4
end-volume
Here brick1 replicates all files to brick4. Is that correct?
volume afr2
type cluster/afr
subvolumes brick2 brick5
end-volume
volume afr3
type cluster/afr
subvolumes brick3 brick6
end-volume
volume afr-ns
type cluster/afr
subvolumes brick-ns1 brick-ns2
end-volume
Here the namespace information is replicated. Is that correct?
volume unify
type cluster/unify
option namespace afr-ns
option scheduler rr
subvolumes afr1 afr2 afr3
end-volume
What is the meaning of namespace in GlusterFS?
What about storage scalability in this design, on both the server and client side? Can you please give an example?
How can I do an HA + unify design with multiple servers and multiple clients, for example 8 servers and two clients? My rough attempt at a client volume file for that case is below; please correct it if it is wrong.
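Here is my guess, extending the 6-server example in the same pattern. I am assuming the eight servers are 192.168.1.1 to 192.168.1.8, that each exports a volume named brick, that servers 1 and 2 additionally export brick-ns for the namespace, and that server N is mirrored with server N+4 (my own choice of pairing). Both clients would simply load this same volume file, since the client side only describes which servers to connect to.

# client.vol -- my untested guess for 8 servers; both clients would use this same file
volume brick1
type protocol/client
option transport-type tcp
option remote-host 192.168.1.1
option remote-subvolume brick
end-volume
volume brick2
type protocol/client
option transport-type tcp
option remote-host 192.168.1.2
option remote-subvolume brick
end-volume
volume brick3
type protocol/client
option transport-type tcp
option remote-host 192.168.1.3
option remote-subvolume brick
end-volume
volume brick4
type protocol/client
option transport-type tcp
option remote-host 192.168.1.4
option remote-subvolume brick
end-volume
volume brick5
type protocol/client
option transport-type tcp
option remote-host 192.168.1.5
option remote-subvolume brick
end-volume
volume brick6
type protocol/client
option transport-type tcp
option remote-host 192.168.1.6
option remote-subvolume brick
end-volume
volume brick7
type protocol/client
option transport-type tcp
option remote-host 192.168.1.7
option remote-subvolume brick
end-volume
volume brick8
type protocol/client
option transport-type tcp
option remote-host 192.168.1.8
option remote-subvolume brick
end-volume

# namespace bricks on server 1 and server 2, as in the 6-server example
volume brick-ns1
type protocol/client
option transport-type tcp
option remote-host 192.168.1.1
option remote-subvolume brick-ns
end-volume
volume brick-ns2
type protocol/client
option transport-type tcp
option remote-host 192.168.1.2
option remote-subvolume brick-ns
end-volume

# mirror server N with server N+4 (my own pairing choice)
volume afr1
type cluster/afr
subvolumes brick1 brick5
end-volume
volume afr2
type cluster/afr
subvolumes brick2 brick6
end-volume
volume afr3
type cluster/afr
subvolumes brick3 brick7
end-volume
volume afr4
type cluster/afr
subvolumes brick4 brick8
end-volume
volume afr-ns
type cluster/afr
subvolumes brick-ns1 brick-ns2
end-volume
volume unify
type cluster/unify
option namespace afr-ns
option scheduler rr
subvolumes afr1 afr2 afr3 afr4
end-volume

Is this right? And for scaling, am I correct that adding capacity later would just mean adding two more servers as a new mirrored pair (brick9 and brick10 under a new afr5) and appending afr5 to the unify subvolumes line, without touching the existing servers?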
Can anyone please help me understand these points and correct me where I am wrong?
Thanks for your time
L. Mohan