Thanks Ravi, that solves it. Would it have helped if I had specified AAAHOST:nufa-r4, i.e. removed the slash? Asking because the documentation for nufa uses the word "sub-volumes", and as far as I understand a subvolume is a translator with a brick underneath.

Thx again.
hb

On Tue, Oct 15, 2013 at 1:34 PM, Ravishankar N <ravishankar at redhat.com> wrote:

> On 10/15/2013 02:37 PM, Hafeez Bana wrote:
>
> Hi All,
>
> I've been playing with Gluster 3.4x - in particular the nufa
> configuration. All the setups are via the gluster CLI.
>
> A simple nufa config across 2 machines with the subvolumes being bricks
> works without issue.
>
> However, anything more complex gives very strange behaviour. For
> instance, the config below.
>
> This volume is meant to have a replica set on each machine, tied
> together into a nufa config.
>
> I then mount nufa4 on BBBHOST and AAAHOST and simulate some writes. This
> works, and the speeds seem to indicate the writes are happening locally.
>
> However, the strange thing is that if you look into any of the brick
> dirs, e.g. /DIR_ON_HOST/test004a/, no files are present!
>
> The volume continues to operate, but I am baffled about
> 1) where the files are being stored
>
> Hi hb,
> The brick paths in the volume info for 'nufa4' tell you where the
> files go. So
>
> Volume Name: nufa4
> .
> .
> .
> Brick1: AAAHOST:/nufa-r4
> Brick2: BBBHOST:/nufa-r4b
>
> means the files are created in the root file system, in the /nufa-r4 and
> /nufa-r4b directories of your hosts. I think you have misunderstood
> 'AAAHOST:/nufa-r4' and 'BBBHOST:/nufa-r4b' as referring to the 2 replica
> volumes that you have created. They are actually the brick paths.
>
> 2) whether such a configuration is possible, i.e. a nufa volume sitting
> on top of 2 replica volumes (and, in fact, how nufa figures out which
> replica set is on the local host).
>
> Nufa makes sense only on distributed volumes.
> Enabling nufa on a plain
> replicate volume has no effect - the files will be created on all bricks
> that constitute the replica.
> Thanks,
> Ravi
>
> Any insight would be helpful.
>
> Thx.
> hb
>
> ----------------
> Volume Name: nufa-r4
> Type: Replicate
> Volume ID: d47ceecf-11be-49a3-8b27-9d1df61f588c
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: AAAHOST:/DIR_ON_HOST_A/test004a
> Brick2: AAAHOST:/DIR_ON_HOST_A/test004b
>
> Volume Name: nufa-r4b
> Type: Replicate
> Volume ID: 98bc1402-fbd9-4637-81c2-2f24029d51bc
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: BBBHOST:/DIR_ON_HOST_B/test004a
> Brick2: BBBHOST:/DIR_ON_HOST_B/test004b
>
> Volume Name: nufa4
> Type: Distribute
> Volume ID: 061a3359-ec92-4ab3-9305-bae783df0ec2
> Status: Started
> Number of Bricks: 2
> Transport-type: tcp
> Bricks:
> Brick1: AAAHOST:/nufa-r4
> Brick2: BBBHOST:/nufa-r4b
> Options Reconfigured:
> cluster.nufa: on
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
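
[Editor's note: since nufa only acts on the distribute layer, the usual way to get "a replica set on each machine, tied together with nufa" is a single distributed-replicate volume, not a distribute volume whose bricks are paths inside separate replicate volumes. A sketch using the hosts and brick directories from the thread; the commands assume a working 2-node trusted pool and are illustrative, not taken from the thread itself:]

```shell
# One distributed-replicate volume: with "replica 2", bricks are grouped
# into replica sets in the order listed, so each pair below becomes a
# replica set that lives entirely on one host (2 x 2 = 4 bricks).
# Note: gluster warns (and asks for confirmation) when both bricks of a
# replica set are on the same host, since that gives no host-level
# redundancy - but it matches the setup described in the thread.
gluster volume create nufa4 replica 2 \
    AAAHOST:/DIR_ON_HOST_A/test004a AAAHOST:/DIR_ON_HOST_A/test004b \
    BBBHOST:/DIR_ON_HOST_B/test004a BBBHOST:/DIR_ON_HOST_B/test004b

# Enable nufa on the distribute layer, so new files from a client that
# is also a server prefer the replica set local to that host.
gluster volume set nufa4 cluster.nufa on

gluster volume start nufa4
```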