[Gluster-devel] Re: HA + unify design with multiple servers and multiple clients

Hi,
> namespace is just a cache, which holds the directory tree of unify.
> Please note that the files contained in this directory tree are
> zero bytes in size.

Does that mean we can point the unify namespace to a local
export (a local glusterfsd) on each client for better performance, and
skip the "slow" Ethernet on file cache lookups?

This would be very nice if the exported volume is very static in its
file locations.
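
Something like this sketch is what I have in mind, assuming a
glusterfsd on each client exports a local copy of the namespace (the
volume names and the brick-ns export here are my own invention):

volume local-ns
 type protocol/client
 option transport-type tcp
 option remote-host 127.0.0.1      # glusterfsd running on this client
 option remote-subvolume brick-ns  # assumed local namespace export
end-volume

volume unify
 type cluster/unify
 option namespace local-ns         # namespace lookups stay local
 option scheduler rr
 subvolumes afr1 afr2 afr3         # the afr volumes from the example below
end-volume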

Or does this affect POSIX locking, renaming of files and so on at a
global level?

/Thomas.

On Jan 27, 2009, at 6:41 AM, Raghavendra G wrote:

> Hi Mohan,
> please find the inlined comments.
>
> 2009/1/23 mohan L <l.mohanphy at gmail.com>
> Dear All ,
>
> I am trying to design a highly available cluster setup for my
> benchmarking. Today I read some design information available on the
> GlusterFS home page.
>
>
> http://www.gluster.org/docs/index.php/Simple_High_Availability_Storage_with_GlusterFS_2.0#Larger_storage_using_Unify_.2B_AFR
>
>   It is configured using 6 servers and a single client. Server 1 and
> server 2 each have two directories, /export and /export-ns.
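>
> I assume the server side on each machine looks something like this
> (the export paths and the auth option are my assumption, not taken
> from that page):
>
> volume brick
>  type storage/posix
>  option directory /export           # data export on every server
> end-volume
>
> volume brick-ns
>  type storage/posix
>  option directory /export-ns        # namespace export (servers 1 and 2 only)
> end-volume
>
> volume server
>  type protocol/server
>  option transport-type tcp
>  subvolumes brick brick-ns
>  option auth.addr.brick.allow *     # open auth, just for the example
>  option auth.addr.brick-ns.allow *
> end-volume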
>
> volume brick1
>  type protocol/client
>  option transport-type tcp
>  option remote-host 192.168.1.1      # IP address of the remote brick
>  option remote-subvolume brick       # name of the remote volume
> end-volume
>
> From this I understand that it will mount the directory exported by
> server1 (192.168.1.1) at the client machine's mount point.
>
> volume brick2
>  type protocol/client
>  option transport-type tcp
>  option remote-host 192.168.1.2
>  option remote-subvolume brick
> end-volume
>
> It will mount the directory exported by server2 (192.168.1.2) at the
> client machine's mount point.
>
> volume brick3
>  type protocol/client
>  option transport-type tcp
>  option remote-host 192.168.1.3
>  option remote-subvolume brick
> end-volume
>
> It will mount the directory exported by server3 (192.168.1.3) at the
> client machine's mount point.
>
>
>
> volume brick4
>  type protocol/client
>  option transport-type tcp
>  option remote-host 192.168.1.4
>  option remote-subvolume brick
> end-volume
>
> It will mount the directory exported by server4 (192.168.1.4) at the
> client machine's mount point.
>
>
>
> volume brick5
>  type protocol/client
>  option transport-type tcp
>  option remote-host 192.168.1.5
>  option remote-subvolume brick
> end-volume
>
> It will mount the directory exported by server5 (192.168.1.5) at the
> client machine's mount point.
>
>
>
> volume brick6
>  type protocol/client
>  option transport-type tcp
>  option remote-host 192.168.1.6
>  option remote-subvolume brick
> end-volume
>
> It will mount the directory exported by server6 (192.168.1.6) at the
> client machine's mount point.
>
>
>
> volume brick-ns1
>  type protocol/client
>  option transport-type tcp
>  option remote-host 192.168.1.1
>  option remote-subvolume brick-ns  # note the different remote volume name
> end-volume
>
> It will mount the directory (/home/export-ns/) exported by server1
> (192.168.1.1) at the client machine's mount point.
>
>
> volume brick-ns2
>  type protocol/client
>  option transport-type tcp
>  option remote-host 192.168.1.2
>  option remote-subvolume brick-ns  # note the different remote volume name
> end-volume
>
> It will mount the directory (/home/export-ns/) exported by server2
> (192.168.1.2) at the client machine's mount point.
>
>
>
> volume afr1
>  type cluster/afr
>  subvolumes brick1 brick4
> end-volume
>
>
>
> Here brick1 replicates all files to brick4. Is that correct?
>
> yes.
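>
> (That is, a file created through afr1 is written to the exports on
> both 192.168.1.1 and 192.168.1.4, so either server can fail without
> losing the file.)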
>
>
>
> volume afr2
>  type cluster/afr
>  subvolumes brick2 brick5
> end-volume
>
> volume afr3
>  type cluster/afr
>  subvolumes brick3 brick6
> end-volume
>
>
> volume afr-ns
>  type cluster/afr
>  subvolumes brick-ns1 brick-ns2
> end-volume
>
> Here the namespace information is replicated. Is that correct?
>
> volume unify
>  type cluster/unify
>  option namespace afr-ns
>  option scheduler rr
>  subvolumes afr1 afr2 afr3
> end-volume
>
> What exactly does unify do here?
>
> unify is used to aggregate the contents of afr1, afr2 and afr3. Say
> the following is the list of files on afr1, afr2 and afr3:
>
> afr1: file-1
> afr2: file-2
> afr3: file-3
>
> then using unify to aggregate all three afr subvolumes results in a
> filesystem containing all three files:
>
> unify (of afr1, afr2, afr3): file-1, file-2, file-3.
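>
> So, with the unify volume mounted at, say, /mnt/glusterfs (a
> hypothetical mount point), a listing shows all three files even
> though each one lives on a different afr pair:
>
> $ ls /mnt/glusterfs
> file-1  file-2  file-3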
>
>
> What is the meaning of namespace in GlusterFS?
>
> namespace is just a cache, which holds the directory tree of unify.
> Please note that the files contained in this directory tree are
> zero bytes in size.
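>
> For example, on the namespace export itself you would see the same
> names, but every file is empty (the path and timestamps here are
> hypothetical):
>
> $ ls -l /home/export-ns
> -rw-r--r-- 1 root root 0 Jan 27 10:00 file-1
> -rw-r--r-- 1 root root 0 Jan 27 10:00 file-2
> -rw-r--r-- 1 root root 0 Jan 27 10:00 file-3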
>
>
> What about storage scalability in this design, on both the server
> and the client side? Can you please give one example?
>
> The only bottleneck in scalability is the namespace node. It should
> be able to hold the entire directory structure of unify (with zero
> byte sized files). Other than that, a new node can be added just by
> changing the configuration file and remounting glusterfs.
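>
> As a sketch, adding a seventh server (the IP is assumed) is one more
> protocol/client volume plus one more name in the unify subvolumes
> list, after which the clients remount:
>
> volume brick7
>  type protocol/client
>  option transport-type tcp
>  option remote-host 192.168.1.7    # the new server
>  option remote-subvolume brick
> end-volume
>
> volume unify
>  type cluster/unify
>  option namespace afr-ns
>  option scheduler rr
>  subvolumes afr1 afr2 afr3 brick7  # brick7 added, unreplicated here
> end-volume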
>
>
> How can one do an HA + unify design with multiple servers and
> multiple clients? For example, 8 servers and two clients.
>
> If replication is not needed, each of the clients can have a unify
> of all the volumes exported by the servers.
>
> volume unify
>   type cluster/unify
>   subvolumes ha1 ha2 ... ha8
> end-volume
>
> and ha1, ha2, ..., ha8 provide high availability using multiple
> links to the same server.
>
> say,
> volume ha1
>   type cluster/ha
>   subvolumes client-1a client-1b
> end-volume
>
> where client-1a and client-1b are two different links to server1.
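>
> For completeness, client-1a and client-1b would just be two
> protocol/client volumes pointing at the same server over two
> different links (the second address is an assumption for the
> example):
>
> volume client-1a
>   type protocol/client
>   option transport-type tcp
>   option remote-host 192.168.1.1   # first link to server1
>   option remote-subvolume brick
> end-volume
>
> volume client-1b
>   type protocol/client
>   option transport-type tcp
>   option remote-host 10.0.0.1      # second link to server1 (assumed)
>   option remote-subvolume brick
> end-volume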
>
> regards,
>
>
>
>
>  Can anyone please help me understand these points and correct me?
>
> Thanks for your time
>  L. Mohan
>
> -- 
> Raghavendra G
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at nongnu.org
> http://lists.nongnu.org/mailman/listinfo/gluster-devel


