You could either completely avoid the client/server thing and directly
have the cluster and posix translators on the client side (under fuse), or
you could use transport/unix to avoid tcp on local machines.
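For example, a minimal client spec along those lines might look like this (just a sketch in 1.3-style volfile syntax; the directories and volume names are made up, and option names can differ between versions):

    # client.vol - posix and cluster translators loaded directly into
    # the glusterfs client process; no glusterfsd, no tcp involved
    volume brick1
      type storage/posix
      option directory /export/brick1
    end-volume

    volume brick2
      type storage/posix
      option directory /export/brick2
    end-volume

    volume mirror
      type cluster/afr
      subvolumes brick1 brick2
    end-volume

mounted with something like:

    glusterfs -f client.vol /mnt/gluster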
Thanks for the quick response - it works just as expected.
Just another question: if I now mount all the volumes locally this way,
skipping glusterfsd, can I also start glusterfsd and export the same volume
to be mounted on remote clients?
Basically, can the bricks and namespace be mounted/reused by more than
one 'glusterfs(d)' instance, or will that conflict and thrash the
filesystem if clients access files in different ways?
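To make the question concrete, I mean something like running this server spec against the same backend directory that the local mount above already uses directly (again a sketch in 1.3-style syntax, names invented, auth option names may vary by version):

    # server.vol - exports one of the same backend directories over tcp
    volume brick1
      type storage/posix
      option directory /export/brick1
    end-volume

    volume server
      type protocol/server
      option transport-type tcp/server
      option auth.ip.brick1.allow *
      subvolumes brick1
    end-volume

started with 'glusterfsd -f server.vol', while the local glusterfs mount keeps touching /export/brick1 through its own storage/posix translator.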
Going further: can an exported brick (a posix filesystem used as bulk
disk space) be part of different storage setups, e.g. part of an afr in one
configuration and part of a unify in another? (This is just a theoretical
question, to clarify how weirdly gluster can be "programmed" and still
remain consistent.)
rr