Replies are inline, hopefully I'm right here.... ;-)

<spec example snipped>

> Note: it is assumed that host-a:/data/a, host-b:/data/b,
> host-ns:/data/ns exist.
>
> Q. Does gluster have a problem with server spec volumes which do
> not exist on a given host?

I believe you are asking "what happens when you export a non-existent
directory as a volume?" There should be an error in the logs, correct?

> From the "Aggregating Three Storage
> Servers with Unify" example it does not seem like it would, since
> the namespace volume is _not_ on all nodes.

Right, when creating your client volume spec you define which server to
get the ns volume from. See below: line 4 defines the remote host to get
the remote ns subvolume from, and line 5 specifies the name of the ns
subvolume as defined in the server spec file.

However, if all of your servers are using the same server volume spec
file that you included in your original post, then strictly speaking they
*should* all have an ns directory so as not to log an error. But your
clients will only ever use the ns directory on the server you name in the
remote-ns volume definition below (line 4).

1 volume remote-ns
2   type protocol/client
3   option transport-type tcp/client
4   option remote-host host-ns
5   option remote-subvolume ns
6 end-volume

> Q. How do the gluster servers know which directories to "export"
> from each host? Is it based on requests from the client? I.e., the
> server does not care until/unless a client asks?

The client will use the bottommost volume as the mount point (right,
devs? :-) ). The last section of your client volume spec file should look
like the block below: it defines a unify volume using remote-a and
remote-b as its subvolumes, with remote-ns as the namespace for that
unify volume.

1 volume unify
2   type cluster/unify
3   option scheduler rr
4   option namespace remote-ns
5   subvolumes remote-a remote-b
6 end-volume
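For completeness, the remote-a and remote-b subvolumes referenced by the
unify volume above would be defined the same way as remote-ns, just
pointing at the other hosts. A minimal sketch, assuming the server spec
exports its data directory under the volume name "brick" (adjust that
name to whatever your actual server spec uses):

volume remote-a
  type protocol/client
  option transport-type tcp/client
  option remote-host host-a
  option remote-subvolume brick
end-volume

volume remote-b
  type protocol/client
  option transport-type tcp/client
  option remote-host host-b
  option remote-subvolume brick
end-volume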
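And on the export question: as I understand it, the servers know which
directories to export because each directory is declared as a
storage/posix volume in the server spec and listed as a subvolume of
protocol/server; it isn't driven by client requests, the client just asks
for an exported subvolume by name. A rough sketch of what such a server
spec might look like, assuming the volume names "brick" and "ns" and the
auth.ip access option (double-check both against your actual spec and
your GlusterFS version):

volume brick
  type storage/posix
  option directory /data/a          # /data/b on host-b
end-volume

volume ns
  type storage/posix
  option directory /data/ns         # only actually used on host-ns
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  subvolumes brick ns
  option auth.ip.brick.allow *      # wide open; restrict in production
  option auth.ip.ns.allow *
end-volume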