Ensure that the namespace volume is actually a shared volume across clients and not a local posix volume at each client.
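For example, on each client the namespace should be a protocol/client volume pointing at one exported namespace volume, not a storage/posix volume on the client's own disk. A rough sketch (the host and volume names here are placeholders, not from Harris's setup below):

### wrong: a namespace on the client's local disk -- other clients cannot see it
# volume local-ns
#   type storage/posix
#   option directory /mnt/namespace
# end-volume

### right: every client connects to the same exported namespace volume
volume shared-ns
  type protocol/client
  option transport-type tcp/client
  option remote-host ns.example.com    # placeholder: the server exporting the namespace
  option remote-port 6997
  option remote-subvolume brick-ns     # placeholder: the exported namespace volume
end-volume

That way all clients resolve names against the same namespace instead of each keeping a private copy.

The documentation is still on its way to completion. Thank you for being patient :)

thanks,
avati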
2007/7/1, Harris Landgarten <harrisl@xxxxxxxxxxxxx>:

You have to export a namespace volume somewhere. I put it on an existing brick. This is my spec file for that brick:

### Export volume "brick" with the contents of the "/mnt/export" directory.
volume posix2
  type storage/posix              # POSIX FS translator
  option directory /mnt/export    # export this directory
end-volume

volume io-threads
  type performance/io-threads
  option thread-count 1
  subvolumes posix2
end-volume

### Add POSIX record locking support to the storage brick.
volume brick
  type features/posix-locks
  option mandatory on             # enables mandatory locking on all files
  subvolumes io-threads
end-volume

### Add network serving capability to the above brick.
volume server
  type protocol/server
  option transport-type tcp/server   # for TCP/IP transport
  option listen-port 6996            # default is 6996
  subvolumes brick
  option auth.ip.brick.allow *       # allow access to "brick" volume
end-volume

### The namespace volume, exported on its own port.
volume posix3
  type storage/posix
  option directory /mnt/namespace
end-volume

volume io-threads1
  type performance/io-threads
  option thread-count 1
  subvolumes posix3
end-volume

volume brick-ns
  type features/posix-locks
  option mandatory on
  subvolumes io-threads1
end-volume

volume server-ns                     # renamed from "server": volume names must be unique within a spec file
  type protocol/server
  option transport-type tcp/server
  option listen-port 6997
  subvolumes brick-ns
  option auth.ip.brick-ns.allow *
end-volume

Then in the client, specify the namespace volume as a client and use that client in the unify volume:

volume client-ns
  type protocol/client
  option transport-type tcp/client   # for TCP/IP transport
  option remote-host 10.255.57.171   # IP address of the remote brick
  option remote-port 6997            # default server port is 6996
  option remote-subvolume brick-ns   # name of the remote volume
end-volume

volume bricks
  type cluster/unify
  option namespace client-ns         # this will not be a storage child of unify
  subvolumes client1 client2         # ...and any other storage clients ("etc.")
end-volume

Harris

----- Original Message -----
From: "DeeDee Park" <deedee6905@xxxxxxxxxxx>
To: gluster-devel@xxxxxxxxxx
Sent: Saturday, June 30, 2007 10:05:19 PM (GMT-0500) America/New_York
Subject: [Gluster-devel] cluster/unify "option namespace <node>" - self healing feature?

Anyone have any more info on the new "option namespace <node>" in the unify xlator? bulde wrote some new docs today, but they didn't really answer my question as to what <node> is supposed to be. It seems to have broken my setup. Is it supposed to be an existing volume? Is it supposed to be "namespace-child"? Is it supposed to be something else? I tried all of them, but nothing seems to work, and glusterfs seems to exit without it set.

Y

_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
http://lists.nongnu.org/mailman/listinfo/gluster-devel
-- Anand V. Avati