Just started using it...

Hello all, I am very interested in using gluster and I like what I
have seen so far.  I have a few questions, though, about some odd
behavior I am seeing.  My overall goal is to set up a replicated file
service that is as fully automated as possible.

1.  The access permissions seem a little strange.  It appears that if I
want to restrict access to only certain IPs, NFS gets turned off.  How
do NFS and the gluster client access controls work together?  It also
seems that changes to auth.allow or nfs.rpc-auth-allow don't take
effect immediately unless I make them while the volume is offline.  Is
that expected?

For example, see my config below.
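For reference, I set the options with the standard volume set command,
along these lines (re-typed here, so treat it as approximate):

    gluster volume set test2 auth.allow 10.165.20.*
    gluster volume set test2 nfs.rpc-auth-allow 10.165.20.*
    gluster volume set test3 auth.allow 10.165.20.245,10.165.20.246
    gluster volume set test3 nfs.rpc-auth-allow 10.165.20.245,10.165.20.246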

I can mount test1 from anywhere via both glusterfs and NFS, as expected.

I can mount test2 via glusterfs only from 10.165.20.* addresses, as
expected.  But I can mount it via NFS from anywhere; I should only be
able to do that from 10.165.20.*, right?

I can mount test3 via glusterfs only from the IPs specified, good!  But
the NFS daemon isn't exporting test3 at all?!?
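For what it's worth, the client-side mounts I'm testing with look
roughly like this (mount points are just examples; gluster's NFS is
v3-only, hence the options):

    mount -t glusterfs test8guest1:/test2 /mnt/test2
    mount -t nfs -o vers=3,nolock test8guest1:/test2 /mnt/test2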

2.  Is there a way to turn off NFS completely, to restrict access to
the FUSE client only?  And is there a way to restrict read-only vs.
read-write from the server's perspective, or only via the client mount
options?
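I did come across mention of a per-volume nfs.disable option; is
something like this the supported way to do it?  (A guess on my part,
I haven't verified either of these:)

    gluster volume set test2 nfs.disable on
    # and maybe this for server-side read-only, if my version has it:
    gluster volume set test2 features.read-only on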

3.  I would really like to run gluster as one pool, with directories
'exported' out of it, but it appears that every client would see the
full filesystem size, and quotas are not really there yet.  Is that a
fair assumption?  If so, I will continue down the path of a small LVM
chunk for each gluster volume, as sketched below.
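To be concrete, the small-LVM-chunk approach I mean is roughly this
(volume group name and size are made up):

    lvcreate -L 10G -n test4 vg_bricks
    mkfs.xfs /dev/vg_bricks/test4
    mount /dev/vg_bricks/test4 /gluster/test4
    # after creating the /gluster/test4/exp directory on both nodes:
    gluster volume create test4 replica 2 \
        test8guest1:/gluster/test4/exp test8guest2:/gluster/test4/exp
    gluster volume start test4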

4.  If I wanted to make automation easier by managing the configuration
files directly instead of going through the gluster commands, what is
the best way to get gluster to pick up changes to those files?
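The heavy-handed version would be something like pushing an edited
volfile into glusterd's working directory and bouncing the daemon
(paths are from memory on my install, and I'm not sure a full restart
is even the right way to make it re-read them):

    scp test2-fuse.vol test8guest1:/etc/glusterd/vols/test2/
    ssh test8guest1 '/etc/init.d/glusterd restart'

Is there a lighter-weight reload than that?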

Thanks in advance!

Output of gluster volume info:

Volume Name: test2
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: test8guest1:/gluster/test2/exp
Brick2: test8guest2:/gluster/test2/exp
Options Reconfigured:
nfs.rpc-auth-allow: 10.165.20.*
auth.allow: 10.165.20.*

Volume Name: test1
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: test8guest1:/gluster/test1/exp
Brick2: test8guest2:/gluster/test1/exp

Volume Name: test3
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: test8guest1:/gluster/test3/exp
Brick2: test8guest2:/gluster/test3/exp
Options Reconfigured:
auth.allow: 10.165.20.245,10.165.20.246
nfs.rpc-auth-allow: 10.165.20.245,10.165.20.246


Here is the nfs-server section of the generated volfile:

volume nfs-server
    type nfs/server
    option nfs.dynamic-volumes on
    option rpc-auth.addr.test2.allow *
    option nfs3.test2.volume-id 4e479c60-3ff5-46c8-a3d5-c9ac78e03b67
    option rpc-auth.addr.test1.allow *
    option nfs3.test1.volume-id d56ef908-d160-4176-aaef-ebc2ae1232f9
    option rpc-auth.addr.test3.allow *
    option nfs3.test3.volume-id 98a1ed18-27b6-4d06-a92d-83963e885f85
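    # note: these rpc-auth.addr.*.allow lines all read '*', even though
    # nfs.rpc-auth-allow is restricted on test2 and test3; that would
    # explain why NFS on test2 is mountable from anywhere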
    subvolumes test2 test1 test3
end-volume



-- 
  Jason Tolsma


