On 10/15/2013 06:38 PM, Hafeez Bana wrote:
> Thanks Ravi, that solves it.
>
> Would it have helped if I had specified AAAHOST:nufa-r4 i.e. removed
> the slash?
>
Actually it wouldn't. Nufa really applies to bricks. What enabling nufa
does is this: if you happen to mount the gluster volume on one of the
machines that contains a brick of that volume, then all writes from
that mount point will go to that brick, irrespective of which brick the
file name hashes to. Consider a distributed volume of 3 bricks on 3
machines:

Brick1: A_HOST:/brick
Brick2: B_HOST:/brick
Brick3: C_HOST:/brick

Without nufa, the files that you create from the mount point will hash
to (i.e. reside in) one of the 3 bricks, depending on the filename.
With nufa enabled, if you happen to mount the volume on B_HOST, then
all files created from that mount point will always go to
B_HOST:/brick, irrespective of which brick the filename really hashes
to.

> Asking because the documentation for nufa uses the words sub-volumes -
> and as far as I understand a subvolume is a translator with a brick
> underneath.
>
Well, a subvolume is a brick after being processed by a translator
(http://www.gluster.org/community/documentation/index.php/GlusterFS_Concepts).

> Thx again.
> hb
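To make the behaviour above concrete, the 3-brick example could be set
up with the gluster CLI roughly as follows (the volume name "dist-vol"
and the mount point are placeholders; an untested sketch, not taken
from the thread):

    # a plain 3-brick distributed volume
    gluster volume create dist-vol A_HOST:/brick B_HOST:/brick C_HOST:/brick
    gluster volume start dist-vol

    # enable nufa on it
    gluster volume set dist-vol cluster.nufa on

    # mount it on B_HOST; with nufa on, every file created through
    # this mount point lands in B_HOST:/brick
    mkdir -p /mnt/dist-vol
    mount -t glusterfs B_HOST:/dist-vol /mnt/dist-vol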
> On Tue, Oct 15, 2013 at 1:34 PM, Ravishankar N
> <ravishankar at redhat.com> wrote:
>
> On 10/15/2013 02:37 PM, Hafeez Bana wrote:
>> Hi All,
>>
>> I've been playing with Gluster 3.4x - in particular the nufa
>> configuration. All the setups are via the gluster CLI.
>>
>> A simple nufa config across 2 machines, with the subvolumes being
>> bricks, works without issue.
>>
>> However, anything more complex gives very strange behaviors. Take,
>> for instance, the config below.
>>
>> This volume is meant to have a replica set on each machine, with the
>> two then tied together into a nufa config.
>>
>> I then mount nufa4 on BBBHOST and AAAHOST and simulate some writes.
>> This works, and the speeds seem to indicate the writes are happening
>> locally.
>>
>> However, the strange thing is that if you look into any of the brick
>> dirs, i.e. /DIR_ON_HOST/test004a/, no files are present!
>>
>> The volume continues to operate, but I am baffled about
>> 1) Where the files are being stored
>
> Hi hb,
> The brick path present in the volume info for 'nufa4' tells you
> where the files go. So
>
> Volume Name: nufa4
> .
> .
> .
> Brick1: AAAHOST:/nufa-r4
> Brick2: BBBHOST:/nufa-r4b
>
> means the files are created in the root file system, in the /nufa-r4
> and /nufa-r4b directories of your hosts. I think you have
> misunderstood 'AAAHOST:/nufa-r4' and 'BBBHOST:/nufa-r4b' as referring
> to the 2 replica volumes that you have created. They are actually the
> brick paths.
>
>> 2) Whether such a configuration is possible, i.e. a nufa sitting on
>> top of 2 replica systems (and in fact how nufa figures out which
>> replica set is on the local host).
>
> Nufa makes sense only on distributed volumes. Enabling nufa on a
> plain replicate volume has no effect - the files will be created on
> all bricks that constitute the replica.
> Thanks,
> Ravi
>
>> Any insight would be helpful.
>>
>> Thx.
>> hb
>>
>> ----------------
>> Volume Name: nufa-r4
>> Type: Replicate
>> Volume ID: d47ceecf-11be-49a3-8b27-9d1df61f588c
>> Status: Started
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: AAAHOST:/DIR_ON_HOST_A/test004a
>> Brick2: AAAHOST:/DIR_ON_HOST_A/test004b
>>
>> Volume Name: nufa-r4b
>> Type: Replicate
>> Volume ID: 98bc1402-fbd9-4637-81c2-2f24029d51bc
>> Status: Started
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: BBBHOST:/DIR_ON_HOST_B/test004a
>> Brick2: BBBHOST:/DIR_ON_HOST_B/test004b
>>
>> Volume Name: nufa4
>> Type: Distribute
>> Volume ID: 061a3359-ec92-4ab3-9305-bae783df0ec2
>> Status: Started
>> Number of Bricks: 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: AAAHOST:/nufa-r4
>> Brick2: BBBHOST:/nufa-r4b
>> Options Reconfigured:
>> cluster.nufa: on
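To see where the files actually end up, write through the glusterfs
mount of nufa4 and then inspect the brick directories of nufa4 itself,
not those of the two replica volumes (the mount point and file name
below are placeholders):

    # on AAAHOST or BBBHOST, through the mount of nufa4
    echo hello > /mnt/nufa4/testfile

    # the file appears in one of nufa4's brick directories
    ls /nufa-r4       # on AAAHOST
    ls /nufa-r4b      # on BBBHOST

    # the bricks of the replica volumes stay empty, which is why
    # /DIR_ON_HOST/test004a/ shows no files
    ls /DIR_ON_HOST_A/test004a/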
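As an aside, the layout being aimed for here - a replica pair on each
machine, distributed over with nufa - would normally be built as a
single distributed-replicate volume rather than by layering gluster
volumes on top of each other. A minimal sketch, reusing the hosts and
brick paths from the listing above (untested; note that with both
bricks of a replica pair on one host there is no redundancy across
machines, and gluster may warn about, or require 'force' for, replica
bricks on the same server):

    # consecutive bricks form the replica pairs: 2 x 2 = 4 bricks
    gluster volume create nufa4 replica 2 \
        AAAHOST:/DIR_ON_HOST_A/test004a AAAHOST:/DIR_ON_HOST_A/test004b \
        BBBHOST:/DIR_ON_HOST_B/test004a BBBHOST:/DIR_ON_HOST_B/test004b
    gluster volume start nufa4

    # nufa then distributes across the two replica pairs
    gluster volume set nufa4 cluster.nufa on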