Daniel, you don't need unify/namespace for your setup here; you can
directly export gfs-ds-afr from the server and mount it on the client.
Can you paste your client spec?

If the client is connected to a server that goes down, the client mount
point becomes inaccessible. In that case you need to set up DNS round
robin, so that on reconnect the client is directed to the next server and
the mount point becomes accessible again. This also answers the question
in your other mail sent just now.

Regards,
Krishna

On Wed, Apr 9, 2008 at 2:55 PM, Daniel Maher <dma+gluster@xxxxxxxxx> wrote:
> On Sun, 6 Apr 2008 00:42:15 -0700 "Amar S. Tumballi"
> <amar@xxxxxxxxxxxxx> wrote:
>
> > Hi all,
> >  GlusterFS-1.3.8pre5 (Release candidate for 1.3.8-stable) is
> > available for download now.
>
> Thanks for the new release.  I built the RPMs and upgraded my test
> cluster to 1.3.8pre5.  After restarting glusterfsd on the storage
> nodes, I noticed the following warning message (which I hadn't seen
> with the FC8 glusterfsd RPM):
>
> 2008-04-09 09:14:57 C [unify.c:4158:init] gfs-unify: WARNING: You have
> defined only one "subvolumes" for unify volume. It may not be the
> desired config, review your volume spec file. If this is how you are
> testing it, you may hit some performance penalty
>
> What does this mean, exactly?  Based on the wiki, as well as feedback
> from the list, my config is set up appropriately - has the recommended
> practice changed for a two-node HA / AFR cluster?
>
> My unify volume definition:
>
> # unify the dataspace and namespace
> volume gfs-unify
>   type cluster/unify
>   subvolumes gfs-ds-afr
>   option namespace gfs-ns-afr
>   # TODO: study other schedulers
>   option scheduler rr  # internal round robin-style scheduler
> end-volume
>
> My entire server config:
> http://pastebin.ca/967749
>
> Thanks!
>
> --
> Daniel Maher <dma AT witbe.net>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel@xxxxxxxxxx
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>
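
For reference, a minimal client spec for the setup Krishna describes might
look like the sketch below. It is only a sketch: gluster.example.com is a
hypothetical round-robin DNS name resolving to both storage nodes, and the
remote-subvolume value has to match the name of the AFR volume actually
exported by the server spec.

# client spec (sketch): mount the server-side AFR volume directly,
# no unify/namespace needed on the client
volume gfs-ds-afr
  type protocol/client
  option transport-type tcp/client
  # hypothetical round-robin hostname pointing at both storage nodes
  option remote-host gluster.example.com
  # must match the volume name exported in the server spec
  option remote-subvolume gfs-ds-afr
end-volume

With a single remote-host entry like this, failover depends entirely on the
DNS round robin: if the server currently in use goes down, the mount stays
inaccessible until the client reconnects and resolves to the surviving node.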