Re: Minimal Design Doc for Name Space Cache

What would be really nice is if this could somehow be implemented in a
distributed fashion, with each server holding a copy of the metadata. I know
this is quite challenging, since every copy needs to be kept up to date, but
it would allow GlusterFS to retain its lack of a single point of failure.

We currently have about 150 TB of data in a single name space FUSE system,
and the metadata consumes just over 1 GB of disk space for roughly 3 million
files. So keeping a copy on each server wouldn't be a huge overhead.
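
For a rough sense of scale (my own back-of-the-envelope figure from the
numbers above): 1 GB of metadata across ~3 million files works out to roughly
350 bytes per file, so a full replica on every server stays tiny next to the
150 TB of data it describes.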



On Thu, 2007-02-15 at 02:20 +0530, Krishna Srinivas wrote:
> Minimal Design Doc for Name Space Cache:
> ----------------------------------------------------------------
> 
> The Name Space Cache is a mirror of the GlusterFS directory tree, but with
> stub files in place of the real ones (not quite empty: each stub contains
> the IP address of the server where the actual file exists).
> This helps us with two issues:
> 1) If a server goes down, there is a chance that a duplicate file name gets
>     created (the create will be allowed because the original file cannot be
>     seen while its server is down). With the NSC info we can make sure the
>     duplicate is not created.
> 2) open() will be faster, as the NSC already knows which server holds the
>     file.
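
To make that concrete, a made-up example of a single cache entry (the paths
and the address are invented; /home/nsc is just the --root from the nscd
example below):

    real file on a brick:   /export/songs/a.mp3     (lives on 192.168.0.11)
    stub in the NSC tree:   /home/nsc/songs/a.mp3   (contains "192.168.0.11")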
> 
> NSC has two components, a server and a client. The NSC server can run on
> one of the server nodes (nscd --root /home/nsc --port 7000) (or should it be
> a part of glusterfsd?). The NSC client module can be a part of glusterfs.
> The client vol spec can contain the line "name-space-cache-server <IP> <port>"
> in the unify volume.
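
For the client side, I picture the spec file ending up something like the
sketch below -- only the name-space-cache-server line is from the proposal;
the rest of the unify volume block (names, scheduler, address) is invented
for illustration and may not match the real syntax exactly:

    volume bricks
      type cluster/unify
      option scheduler rr
      option name-space-cache-server 192.168.0.10 7000
      subvolumes brick1 brick2 brick3
    end-volume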
> 
> The NSC client module will provide the following functions to glusterfs
> (these functions will mostly be used by the unify xlator):
> nsc_init(IP, port)
> nsc_fini()
> nsc_query(path) - returns the IP addr of the node where the file exists.
> nsc_create(path, IP) - called during creation of a file
> nsc_unlink(path)
> nsc_rename(oldpath, newpath, newIP)
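
To make the interface concrete, here is how I would read those as C
prototypes -- the argument types and return conventions are my guesses, not
part of the proposal (in particular I have nsc_query() fill in a caller
supplied buffer rather than return a string):

    #include <stddef.h>   /* for size_t */

    /* Connect to the NSC server; returns 0 on success, -1 on failure. */
    int nsc_init (const char *ip, int port);

    /* Tear down the connection to the NSC server. */
    int nsc_fini (void);

    /* Look up the server holding 'path'; its address is written into
     * 'ip_out' (a buffer of 'len' bytes).  Returns 0 if found, -1 if not. */
    int nsc_query (const char *path, char *ip_out, size_t len);

    /* Record a newly created file and the server it was placed on. */
    int nsc_create (const char *path, const char *ip);

    /* Drop a path from the cache after an unlink. */
    int nsc_unlink (const char *path);

    /* Rename a path; 'new_ip' is the server holding it after the rename. */
    int nsc_rename (const char *oldpath, const char *newpath,
                    const char *new_ip);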
> 
> Unify create() will call nsc_create()
> Unify unlink() will call nsc_unlink()
> Unify rename() will call nsc_rename()
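
For example (purely my sketch; the variable names below are invented), the
create path would end up doing roughly this once the real create has
succeeded:

    /* record which brick the scheduler placed the new file on */
    if (nsc_create (path, chosen_brick_ip) != 0) {
            /* cache update failed -- worth logging, but the file
             * itself was created fine */
    }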
> 
> Unify init() or glusterfs init() will call nsc_init()
> Unify fini() or glusterfs fini() will call nsc_fini()
> 
> Unify open() will call nsc_query() to get the IP address of the node where
> the file exists. Then it will query all of its child xlators to see which of
> them is associated with that IP address and call open on that xlator. (This
> could be implemented by introducing a new mop function?)
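
If it helps, the open() path as I read it would look roughly like the
pseudo-C below. The helpers find_child_by_ip() and open_on_child() do not
exist -- they stand in for whatever the new mop function ends up providing,
and the real fop signature with frames and STACK_WIND is omitted for brevity:

    static int32_t
    unify_open (xlator_t *this, const char *path, int32_t flags)
    {
            char       ip[64];
            xlator_t  *child = NULL;

            /* ask the cache which server holds the file */
            if (nsc_query (path, ip, sizeof (ip)) != 0)
                    return -1;              /* not in the cache */

            /* find the child xlator bound to that server's address --
             * this lookup is where the new mop function would fit */
            child = find_child_by_ip (this, ip);
            if (!child)
                    return -1;              /* stale cache entry? */

            /* hand the open to that one child instead of asking
             * every brick */
            return open_on_child (child, path, flags);
    }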
> 
> Comments and suggestions please.
> 
> 
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel@xxxxxxxxxx
> http://lists.nongnu.org/mailman/listinfo/gluster-devel



