Bricks are running on XFS, which supports and enables ACLs by default. A default ACL (which is inherited by files created inside the directory) is not set automatically on XFS, and that makes sense: a default ACL can be set to whatever the admin finds suitable for a particular case, so setting it automatically to some predefined value would do more harm than good.
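For reference, a default ACL is set explicitly with setfacl's -d flag; new entries in the directory then inherit it. A minimal sketch (group name and path below are made up, and this assumes a filesystem with ACL support):

```shell
# Set a default (inheritable) ACL on a directory; "webteam" and
# /data/share are hypothetical.
setfacl -d -m g:webteam:rwx /data/share

# Files created afterwards inherit the entry automatically:
touch /data/share/newfile
getfacl /data/share/newfile
```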
J.
On Fri, Jul 31, 2015 at 05:01:03PM +0300, Jüri Palis wrote: Yes, that’s the case here. ACLs do not “stick” to directories on NFS mounts when a default ACL is not in place. Individual (user, group, etc.) ACL entries can be set and are displayed correctly right after setting a default ACL.
BTW, I discovered another side effect: when a default ACL is set, you can’t remove it from a directory on an NFS mount (setfacl -k dir), but it works correctly when the native mount is used.
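A minimal reproduction of that side effect might look like this (mount points are hypothetical; this assumes both an NFS and a native mount of the same volume):

```shell
# On the NFS mount:
setfacl -d -m g:tmpgroup:rx /mnt/nfs/dir
setfacl -k /mnt/nfs/dir        # attempt to remove the default ACL
getfacl /mnt/nfs/dir           # default entries are still listed

# On the native (FUSE) mount of the same volume:
setfacl -k /mnt/gluster/dir    # default ACL is removed as expected
```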
I've never heard of directories that do not have default ACLs. Do you have the filesystem on the bricks mounted with the "acl" option, in case the filesystem does not enable ACLs by default? Which filesystem are you using on the bricks?
Thanks, Niels
Regards, J.
On 31 Jul 2015, at 15:47, Soumya Koduri <skoduri@xxxxxxxxxx> wrote:
The issue here is the way we fetch ACLs on a directory.
For a directory, we first fetch the DEFAULT_ACL and then fetch the ACCESS_ACL. But there seems to be a bug in the code: if there is no default ACL set on a directory, we bail out and skip fetching the ACCESS_ACL.
Here is the code snippet -->
/* acl3_default_getacl_cbk: fetch and decode the ACL set in the
 * POSIX_ACL_DEFAULT_XATTR xattr.
 *
 * The POSIX_ACL_DEFAULT_XATTR xattr is only set on directories, not on files.
 *
 * When done with POSIX_ACL_DEFAULT_XATTR, we also need to get and decode the
 * ACL that can be set in POSIX_ACL_ACCESS_XATTR.
 */
int
acl3_default_getacl_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
                         int32_t op_ret, int32_t op_errno, dict_t *dict,
                         dict_t *xdata)
{
        ...
        if ((op_ret < 0) && (op_errno != ENODATA && op_errno != ENOATTR)) {
                stat = nfs3_cbk_errno_status (op_ret, op_errno);
                goto err;
        } else if (!dict) {
                /* no ACL has been set */
                stat = NFS3_OK;
                goto err;
        }
        ...
        ret = nfs_getxattr (cs->nfsx, cs->vol, &nfu, &cs->resolvedloc,
                            POSIX_ACL_ACCESS_XATTR, NULL, acl3_getacl_cbk, cs);
        if (ret < 0) {
                stat = nfs3_errno_to_nfsstat3 (-ret);
                goto err;
        }

        return 0;

err:
        if (getaclreply)
                getaclreply->status = stat;
        acl3_getacl_reply (cs->req, getaclreply);
        nfs3_call_state_wipe (cs);
        return 0;
} <----
To verify this, set an inherit/default ACL on that directory in addition to the ACCESS ACL, and check the getfacl output.
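The verification suggested above could be sketched as follows (mount point and group name are hypothetical, and this assumes the skipped-ACCESS_ACL theory is correct):

```shell
# Over NFS, an access ACL alone may disappear from getfacl output:
setfacl -m g:tmpgroup:rx /mnt/dir5
getfacl /mnt/dir5

# Setting a default ACL as well should make the access entries
# visible again, since the DEFAULT_ACL fetch no longer bails out:
setfacl -d -m g:tmpgroup:rx /mnt/dir5
getfacl /mnt/dir5
```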
Thanks, Soumya
On 07/31/2015 05:51 PM, Soumya Koduri wrote:
On 07/31/2015 05:33 PM, Jüri Palis wrote:
Playing around with my GlusterFS test setup, I discovered the following anomaly.
On a volume with low access traffic, ACLs on directories (managed over NFS) sort of work: I can add a new ACL entry to a directory, and when I execute getfacl right after setting the ACL, it displays the correct settings. However, this data is never replicated to the other GlusterFS node hosting this particular volume, and the ACL disappears a few minutes after it is set.
The same operation performed on a file works correctly: an ACL entry set on a particular file on one GlusterFS NFS server is replicated to the participating node almost immediately.
So, it would be interesting to see if someone here can replicate this anomaly.
J.
I could reproduce a similar issue: after remounting the volume, the directory ACLs are not displayed. I shall look further into this and update with my findings.
# getfacl /mnt/dir5
getfacl: Removing leading '/' from absolute path names
# file: mnt/dir5
# owner: root
# group: root
user::rwx
group::r-x
group:tmpgroup:r-x
mask::r-x
other::r-x

# umount /mnt
# mount -t nfs 10.70.xx.xx:/vol0 /mnt
# getfacl /mnt/dir5
getfacl: Removing leading '/' from absolute path names
# file: mnt/dir5
# owner: root
# group: root
user::rwx
group::r-x
other::r-x
Though these ACLs are displayed when getfacl is run against the brick path directly.
Thanks, Soumya
On 31 Jul 2015, at 09:35, Soumya Koduri <skoduri@xxxxxxxxxx> wrote:
I have tested it using the gluster-NFS server, with GlusterFS version 3.7.* running on a RHEL 7 machine and RHEL 6.7 as the NFS client. ACLs with named groups were set properly on the directory.
Could you please provide us with a packet trace (preferably taken on the server side, so that we can check the Gluster operations too) while doing setfacl and getfacl?
Thanks, Soumya
On 07/30/2015 07:38 PM, Jüri Palis wrote:
Hi,
I mounted the GlusterFS volume with the native mount and ACLs are working as expected; I mounted the same volume with the NFS protocol and the result is exactly as I described below: ACLs set on files work, and ACLs set on directories do not work as expected. Ohh, I’m out of ideas :(
J.
On 30 Jul 2015, at 16:38, Jüri Palis <jyri.palis@xxxxxxxxx> wrote:
[2015-07-30 13:16:01.002296] T [rpcsvc.c:316:rpcsvc_program_actor] 0-rpc-service: Actor found: ACL3 - SETACL for 10.1.1.32:742
[2015-07-30 13:16:01.002325] T [MSGID: 0] [acl3.c:672:acl3svc_setacl] 0-nfs-ACL: FH to Volume: acltest
[2015-07-30 13:16:01.004287] T [rpcsvc.c:1319:rpcsvc_submit_generic] 0-rpc-service: submitted reply for rpc-message (XID: 0x16185ddc, Program: ACL3, ProgVers: 3, Proc: 2) to rpc-transport (socket.nfs-server)
[2015-07-30 13:16:22.823894] T [rpcsvc.c:316:rpcsvc_program_actor] 0-rpc-service: Actor found: ACL3 - GETACL for 10.1.1.32:742
[2015-07-30 13:16:22.823900] T [MSGID: 0] [acl3.c:532:acl3svc_getacl] 0-nfs-ACL: FH to Volume: acltest
[2015-07-30 13:16:22.824218] D [MSGID: 0] [client-rpc-fops.c:1156:client3_3_getxattr_cbk] 0-acltest-client-1: remote operation failed: No data available. Path: <gfid:12f02b4f-a181-47d4-9b5b-69e889483570> (12f02b4f-a181-47d4-9b5b-69e889483570). Key: system.posix_acl_default
[2015-07-30 13:16:22.825675] D [MSGID: 0] [client-rpc-fops.c:1156:client3_3_getxattr_cbk] 0-acltest-client-0: remote operation failed: No data available. Path: <gfid:12f02b4f-a181-47d4-9b5b-69e889483570> (12f02b4f-a181-47d4-9b5b-69e889483570). Key: system.posix_acl_default
[2015-07-30 13:16:22.825713] T [rpcsvc.c:1319:rpcsvc_submit_generic] 0-rpc-service: submitted reply for rpc-message (XID: 0x63815edc, Program: ACL3, ProgVers: 3, Proc: 1) to rpc-transport (socket.nfs-server)
[2015-07-30 13:16:22.828243] T [rpcsvc.c:316:rpcsvc_program_actor] 0-rpc-service: Actor found: ACL3 - SETACL for 10.1.1.32:742
[2015-07-30 13:16:22.828266] T [MSGID: 0] [acl3.c:672:acl3svc_setacl] 0-nfs-ACL: FH to Volume: acltest
[2015-07-30 13:16:22.829931] T [rpcsvc.c:1319:rpcsvc_submit_generic] 0-rpc-service: submitted reply for rpc-message (XID: 0x75815edc, Program: ACL3, ProgVers: 3, Proc: 2) to rpc-transport (socket.nfs-server)
I enabled trace for a few moments and tried to make sense of it by searching for lines containing ‘acl’. According to this, everything kind of works, except for the lines which state “remote operation failed”. Did GlusterFS fail to replicate or commit the ACL changes?
On 07/30/2015 06:22 PM, Jüri Palis wrote:
Hi,
Thanks Niels, your hints about those two options did the trick, although I had to enable both of them and add nscd (sssd provides the user identities) to the mix as well.
Now back to the problem with ACLs. Is your test setup something like this: a GlusterFS 3.7.2 replicated volume on CentOS/RHEL 7, with one or more clients accessing the GlusterFS volumes over the NFS protocol?
As Jiffin had suggested, did you try the same command on a GlusterFS native mount?
Log levels can be increased to TRACE/DEBUG mode using the command 'gluster vol set <volname> diagnostics.client-log-level [TRACE,DEBUG]'
Also please capture a packet trace on the server-side using the command - 'tcpdump -i any -s 0 -w /var/tmp/nfs-acl.pcap tcp and not port 22'
Verify the packets sent by Gluster-NFS process to the brick process to set the ACL.
Thanks, Soumya
# gluster volume info acltest
Volume Name: acltest
Type: Replicate
Volume ID: 9e0de3f5-45ba-4612-a4f1-16bc5d1eb985
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: vfs-node-01:/data/gfs/acltest/brick0/brick
Brick2: vfs-node-02:/data/gfs/acltest/brick0/brick
Options Reconfigured:
server.manage-gids: on
nfs.server-aux-gids: on
performance.readdir-ahead: on
server.event-threads: 32
performance.cache-size: 2GB
storage.linux-aio: on
nfs.disable: off
performance.write-behind-window-size: 1GB
performance.nfs.io-cache: on
performance.nfs.write-behind-window-size: 250MB
performance.nfs.stat-prefetch: on
performance.nfs.read-ahead: on
performance.nfs.io-threads: on
cluster.readdir-optimize: on
network.remote-dio: on
auth.allow: 10.1.1.32,10.1.1.42
diagnostics.latency-measurement: on
diagnostics.count-fop-hits: on
nfs.rpc-auth-allow: 10.1.1.32,10.1.1.42
nfs.trusted-sync: on
Maybe there is a way to increase the verbosity of the NFS server, which could help me trace this problem. I did not find any good hints for increasing NFS server verbosity in the documentation.
Regards, J.
On 30 Jul 2015, at 10:09, Jiffin Tony Thottan <jthottan@xxxxxxxxxx> wrote:
On 29/07/15 20:14, Niels de Vos wrote:
On Wed, Jul 29, 2015 at 05:22:31PM +0300, Jüri Palis wrote:
Hi,
Another issue, with NFS in sec=sys mode. As we all know, there is a limit of 15 security IDs involved when running NFS in sec=sys mode. This limit makes effective and granular use of ACLs assigned through groups almost impossible. One way to overcome the limit is kerberised NFS, but GlusterFS does not natively support this access mode. Another option, at least according to one email thread, is that GlusterFS has an option server.manage-gids which should mitigate this limit and raise it to ninety-something. Is this the option that can be used for increasing the sec=sys limit? Sadly, the documentation does not clearly describe this option: what exactly it does and how it should be used.
server.manage-gids is an option to resolve the groups of a uid in the brick process. You probably need to also use the nfs.server-aux-gids option so that the NFS-server resolves the gids of the uid accessing the NFS-server.
The nfs.server-aux-gids option is used to overcome the AUTH_SYS/AUTH_UNIX limit of (I thought 32?) groups.
The server.manage-gids option is used to overcome the GlusterFS protocol limit of ~93 groups.
If your users do not belong to 90+ groups, you would not need to set the server.manage-gids option, and nfs.server-aux-gids might be sufficient.
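Putting Niels's advice into commands, both options are enabled per volume with the gluster CLI; a sketch using the acltest volume from this thread:

```shell
# Resolve a UID's full group list on the NFS server, instead of trusting
# the limited AUTH_SYS group list sent by the client:
gluster volume set acltest nfs.server-aux-gids on

# Resolve groups again in the brick process, to bypass the ~93-group
# limit of the GlusterFS protocol (only needed for users in 90+ groups):
gluster volume set acltest server.manage-gids on
```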
HTH, Niels
J.
On 29 Jul 2015, at 16:16, Jiffin Tony Thottan <jthottan@xxxxxxxxxx> wrote:
On 29/07/15 18:04, Jüri Palis wrote:
Hi,
setfacl for dir on local filesystem:
1. set acl:

setfacl -m g:x_meie_sec-test02:rx test

2. get acl:

# getfacl test
user::rwx
group::r-x
group:x_meie_sec-test02:r-x
mask::r-x
other::r-x
setfacl for dir on GlusterFS volume which is NFS mounted to client system
1. The same command is used for setting the ACE; no error is returned by that command.

2. get acl:

# getfacl test
user::rwx
group::r-x
other::---
If I use an ordinary file as a target on GlusterFS, like this:

setfacl -m g:x_meie_sec-test02:rw dummy

then the ACE is set for the file dummy stored on GlusterFS:

# getfacl dummy
user::rw-
group::r--
group:x_meie_sec-test02:rw-
mask::rw-
other::---
So, as you can see, setting ACLs works for files but does not work for directories.
This is all happening on CentOS 7, running GlusterFS 3.7.2.
Hi Jyri,
It seems there are a couple of issues:
1.) when you set a named-group ACL for a file/directory, it clears the permissions of others too;
2.) named-group ACLs are not working properly for directories.
I will try the same on my setup and share my findings. -- Jiffin
In my setup (glusterfs 3.7.2 and a RHEL 7.1 client) it worked properly.
I followed the same steps mentioned by you.

# cd /mnt
# mkdir dir
# touch file
# getfacl file
# file: file
# owner: root
# group: root
user::rw-
group::r--
other::r--

# getfacl dir
# file: dir
# owner: root
# group: root
user::rwx
group::r-x
other::r-x

# setfacl -m g:gluster:rw file
# getfacl file
# file: file
# owner: root
# group: root
user::rw-
group::r--
group:gluster:rw-
mask::rw-
other::r--

# setfacl -m g:gluster:r-x dir
# getfacl dir
# file: dir
# owner: root
# group: root
user::rwx
group::r-x
group:gluster:r-x
mask::r-x
other::r-x
So, can you share the following information from the server:
1.) gluster vol info
2.) nfs.log (the nfs-server log)
3.) the brick logs
And also, can you try the same on the FUSE mount (gluster native mount)?
-- Jiffin
J.
On 29 Jul 2015, at 15:16, Jiffin Thottan <jthottan@xxxxxxxxxx> wrote:
----- Original Message -----
From: "Jüri Palis" <jyri.palis@xxxxxxxxx>
To: gluster-users@xxxxxxxxxxx
Sent: Wednesday, July 29, 2015 4:19:20 PM
Subject: GlusterFS 3.7.2 and ACL
Hi
Setup: GFS 3.7.2, NFS is used for host access
Problem: POSIX ACL work correctly when ACLs are applied to files but do not work when ACLs are applied to directories on GFS volumes.
How can I debug this issue more deeply?
Can you please explain the issue in more detail, i.e., what exactly is not working properly? Is it setting the ACL or some other functionality issue, and on which client?
-- Jiffin
Regards, Jyri
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users