Hi Jiffin,
Please find the attached log file.
Regards,
Abhishek
On Tue, Apr 26, 2016 at 3:55 PM, Jiffin Tony Thottan <jthottan@xxxxxxxxxx> wrote:
On 26/04/16 15:28, ABHISHEK PALIWAL wrote:
Hi Jiffin,
Any clue you have on this? I am seeing some logs related to ACL in the command output and some .so files in the glusterfs/tmp-a2.log file, but there is no failure there.

Hi Abhishek,
Can you attach the log file (/var/log/glusterfs/tmp-a2.log)?
Also, you can try out Ganesha, which can export gluster volumes as well as other exports from a single server.
Right now Ganesha only supports NFSv4 ACLs (not POSIX ACLs). Ganesha is also better supported with gluster volumes than knfs is.
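For reference, a gluster volume is exported through NFS-Ganesha with an EXPORT block roughly like the one below. This is only a sketch based on the volume and server named in this thread; the Export_Id, Pseudo path, and squash/protocol settings are assumptions, so check the NFS-Ganesha FSAL_GLUSTER documentation for the authoritative option set:

```
EXPORT {
    Export_Id = 1;                 # any unique id (assumed value)
    Path = "/c_glusterfs";
    Pseudo = "/c_glusterfs";       # NFSv4 pseudo-fs path (assumed)
    Access_Type = RW;
    Squash = No_root_squash;
    Disable_ACL = false;           # enable NFSv4 ACL support
    Protocols = "4";

    FSAL {
        Name = GLUSTER;
        Hostname = "10.32.0.48";
        Volume = "c_glusterfs";
    }
}
```

With this, Ganesha talks to the volume through libgfapi directly, so no FUSE mount or knfs re-export is needed on the server.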
--
Jiffin
Regards,
Abhishek
On Tue, Apr 26, 2016 at 1:17 PM, ABHISHEK PALIWAL <abhishpaliwal@xxxxxxxxx> wrote:
On Tue, Apr 26, 2016 at 12:54 PM, Jiffin Tony Thottan <jthottan@xxxxxxxxxx> wrote:
On 26/04/16 12:22, ABHISHEK PALIWAL wrote:
On Tue, Apr 26, 2016 at 12:18 PM, Jiffin Tony Thottan <jthottan@xxxxxxxxxx> wrote:
On 26/04/16 12:11, ABHISHEK PALIWAL wrote:
Hi,
I want to enable ACL support on a gluster volume using the kernel NFS ACL support, so I followed the steps below after creating the gluster volume:
Is there any specific reason to use knfs instead of the built-in gluster NFS server?
Yes, because we have other NFS-mounted volumes in the system as well.
Did you mean to say that knfs is running on each gluster node (I mean on the bricks)?
Yes.
1. mount -t glusterfs -o acl 10.32.0.48:/c_glusterfs /tmp/a2

2. Update the /etc/exports file:
   /tmp/a2 10.32.*(rw,acl,sync,no_subtree_check,no_root_squash,fsid=14)

3. exportfs -ra

4. gluster volume set c_glusterfs nfs.acl off

5. gluster volume set c_glusterfs nfs.disable on

We have disabled the two options above because we are using the kernel NFS ACL support and that is already enabled.

On the other board we mount it using:

   mount -t nfs -o acl,vers=3 10.32.0.48:/tmp/a2 /tmp/e/

but setting an ACL fails:

   setfacl -m u:application:rw /tmp/e/usr
   setfacl: /tmp/e/usr: Operation not supported
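The server-side steps above can be collected into one script. This is a sketch using the host, volume, and paths from this thread; it only prints the commands by default (set DRY_RUN= to execute), since mount, exportfs, and gluster need root and a live cluster:

```shell
#!/bin/sh
# Sketch of the knfs re-export steps from this thread.
# DRY_RUN is non-empty by default, so commands are echoed, not run.
run() { if [ -n "${DRY_RUN:-1}" ]; then echo "+ $*"; else "$@"; fi; }

SERVER=10.32.0.48
VOL=c_glusterfs
EXPORT_DIR=/tmp/a2

# 1. FUSE-mount the gluster volume with ACL support enabled.
run mount -t glusterfs -o acl "$SERVER:/$VOL" "$EXPORT_DIR"

# 2. Re-export the FUSE mount over kernel NFS.
run sh -c "echo '$EXPORT_DIR 10.32.*(rw,acl,sync,no_subtree_check,no_root_squash,fsid=14)' >> /etc/exports"
run exportfs -ra

# 3. Turn off the built-in gluster NFS pieces, since knfs serves the export.
run gluster volume set "$VOL" nfs.acl off
run gluster volume set "$VOL" nfs.disable on
```

Note that the -o acl on the FUSE mount in step 1 is what makes the client graph load the posix-acl translator, which the attached log confirms (posix-acl-autoload in the final graph).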
Can you please check the clients for hints?
What do I need to check there?
Can you check /var/log/glusterfs/tmp-a2.log?
There is no failure on the server side in the /var/log/glusterfs/tmp-a2.log file, but gluster is not running on the board where I am getting this failure, so it is not possible to check /var/log/glusterfs/tmp-a2.log there.
And "application" is a system user, like below:

   application:x:102:0::/home/application:/bin/sh
I don't know why I am getting this failure when I have enabled ACL support at every step.
Please let me know how I can enable this.
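One way to narrow this down: over NFSv3, POSIX ACLs travel on a separate sideband protocol (NFS_ACL, RPC program 100227), so "Operation not supported" from setfacl on the client often means the server does not register that service; `rpcinfo -p 10.32.0.48 | grep 100227` on the client would show whether it does. A hypothetical helper (names are illustrative) for probing whether a given mount point accepts POSIX ACLs at all:

```shell
#!/bin/sh
# Hypothetical helper: try to set an ACL on a scratch file under the given
# directory and report whether the filesystem accepted it.
probe_acl() {
    dir=$1
    f=$(mktemp "$dir/aclprobe.XXXXXX") || return 1
    if setfacl -m u:nobody:rw "$f" 2>/dev/null; then
        echo "ACLs supported on $dir"
    else
        echo "ACLs NOT supported on $dir"
    fi
    rm -f "$f"
}

# Example: probe a local path; on the board you would probe the NFS mount,
# e.g. probe_acl /tmp/e
probe_acl /tmp
```

Running this on both the server's FUSE mount (/tmp/a2) and the board's NFS mount (/tmp/e) would show which hop drops ACL support.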
Regards,
Abhishek
--
Jiffin
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel
--
Regards
Abhishek Paliwal
[2016-04-26 10:31:58.832506] I [MSGID: 100030] [glusterfsd.c:2318:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.6 (args: /usr/sbin/glusterfs --acl --volfile-server=10.32.0.48 --volfile-id=/c_glusterfs /tmp/a2)
[2016-04-26 10:31:58.851324] W [MSGID: 101012] [common-utils.c:2776:gf_get_reserved_ports] 0-glusterfs: could not open the file /proc/sys/net/ipv4/ip_local_reserved_ports for getting reserved ports info [No such file or directory]
[2016-04-26 10:31:58.851454] W [MSGID: 101081] [common-utils.c:2810:gf_process_reserved_ports] 0-glusterfs: Not able to get reserved ports, hence there is a possibility that glusterfs may consume reserved port
[2016-04-26 10:31:58.852007] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2016-04-26 10:32:11.040212] I [MSGID: 100030] [glusterfsd.c:2318:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.6 (args: /usr/sbin/glusterfs --acl --volfile-server=10.32.0.48 --volfile-id=/c_glusterfs /tmp/a2)
[2016-04-26 10:32:11.059316] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2016-04-26 10:32:11.068992] I [graph.c:269:gf_add_cmdline_options] 0-c_glusterfs-md-cache: adding option 'cache-posix-acl' for volume 'c_glusterfs-md-cache' with value 'true'
[2016-04-26 10:32:11.075214] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2016-04-26 10:32:11.076381] I [MSGID: 114020] [client.c:2118:notify] 0-c_glusterfs-client-0: parent translators are ready, attempting connect on transport
[2016-04-26 10:32:11.077485] I [MSGID: 114020] [client.c:2118:notify] 0-c_glusterfs-client-1: parent translators are ready, attempting connect on transport
[2016-04-26 10:32:11.077945] I [rpc-clnt.c:1847:rpc_clnt_reconfig] 0-c_glusterfs-client-0: changing port to 49152 (from 0)
Final graph:
+------------------------------------------------------------------------------+
  1: volume c_glusterfs-client-0
  2:     type protocol/client
  3:     option ping-timeout 4
  4:     option remote-host 10.32.0.48
  5:     option remote-subvolume /opt/lvmdir/c2/brick
  6:     option transport-type socket
  7:     option username 2039838c-4d62-472d-840b-fadca1fb4fe8
  8:     option password 7e1bc4c9-4e1b-4552-ad53-e396282c74a2
  9:     option send-gids true
 10: end-volume
 11:
 12: volume c_glusterfs-client-1
 13:     type protocol/client
 14:     option ping-timeout 4
 15:     option remote-host 10.32.1.144
 16:     option remote-subvolume /opt/lvmdir/c2/brick
 17:     option transport-type socket
 18:     option username 2039838c-4d62-472d-840b-fadca1fb4fe8
 19:     option password 7e1bc4c9-4e1b-4552-ad53-e396282c74a2
 20:     option send-gids true
 21: end-volume
 22:
 23: volume c_glusterfs-replicate-0
 24:     type cluster/replicate
 25:     subvolumes c_glusterfs-client-0 c_glusterfs-client-1
 26: end-volume
 27:
 28: volume c_glusterfs-dht
 29:     type cluster/distribute
 30:     subvolumes c_glusterfs-replicate-0
 31: end-volume
 32:
 33: volume c_glusterfs-write-behind
 34:     type performance/write-behind
 35:     subvolumes c_glusterfs-dht
 36: end-volume
 37:
 38: volume c_glusterfs-read-ahead
 39:     type performance/read-ahead
 40:     subvolumes c_glusterfs-write-behind
 41: end-volume
 42:
 43: volume c_glusterfs-readdir-ahead
 44:     type performance/readdir-ahead
 45:     subvolumes c_glusterfs-read-ahead
 46: end-volume
 47:
 48: volume c_glusterfs-io-cache
 49:     type performance/io-cache
 50:     subvolumes c_glusterfs-readdir-ahead
 51: end-volume
 52:
 53: volume c_glusterfs-quick-read
 54:     type performance/quick-read
 55:     subvolumes c_glusterfs-io-cache
 56: end-volume
 57:
 58: volume c_glusterfs-open-behind
 59:     type performance/open-behind
 60:     subvolumes c_glusterfs-quick-read
 61: end-volume
 62:
 63: volume c_glusterfs-md-cache
 64:     type performance/md-cache
 65:     option cache-posix-acl true
 66:     subvolumes c_glusterfs-open-behind
 67: end-volume
 68:
 69: volume c_glusterfs
 70:     type debug/io-stats
 71:     option latency-measurement off
 72:     option count-fop-hits off
 73:     subvolumes c_glusterfs-md-cache
 74: end-volume
 75:
 76: volume posix-acl-autoload
 77:     type system/posix-acl
 78:     subvolumes c_glusterfs
 79: end-volume
 80:
 81: volume meta-autoload
 82:     type meta
 83:     subvolumes posix-acl-autoload
 84: end-volume
 85:
+------------------------------------------------------------------------------+
[2016-04-26 10:32:11.079255] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-c_glusterfs-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-04-26 10:32:11.079643] I [rpc-clnt.c:1847:rpc_clnt_reconfig] 0-c_glusterfs-client-1: changing port to 49153 (from 0)
[2016-04-26 10:32:11.079817] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-c_glusterfs-client-0: Connected to c_glusterfs-client-0, attached to remote volume '/opt/lvmdir/c2/brick'.
[2016-04-26 10:32:11.079859] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-c_glusterfs-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2016-04-26 10:32:11.080183] I [MSGID: 108005] [afr-common.c:3841:afr_notify] 0-c_glusterfs-replicate-0: Subvolume 'c_glusterfs-client-0' came back up; going online.
[2016-04-26 10:32:11.080261] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-c_glusterfs-client-0: Server lk version = 1
[2016-04-26 10:32:11.081116] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-c_glusterfs-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-04-26 10:32:11.081818] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-c_glusterfs-client-1: Connected to c_glusterfs-client-1, attached to remote volume '/opt/lvmdir/c2/brick'.
[2016-04-26 10:32:11.081860] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-c_glusterfs-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2016-04-26 10:32:11.092105] I [fuse-bridge.c:5137:fuse_graph_setup] 0-fuse: switched to graph 0
[2016-04-26 10:32:11.092493] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-c_glusterfs-client-1: Server lk version = 1
[2016-04-26 10:32:11.092573] I [fuse-bridge.c:4030:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22 kernel 7.22
[2016-04-26 10:32:11.094678] I [MSGID: 108031] [afr-common.c:1782:afr_local_discovery_cbk] 0-c_glusterfs-replicate-0: selecting local read_child c_glusterfs-client-0
[2016-04-26 10:34:06.155901] E [socket.c:2278:socket_connect_finish] 0-glusterfs: connection to 10.32.0.48:24007 failed (Connection timed out)
[2016-04-26 10:34:06.156024] E [glusterfsd-mgmt.c:1818:mgmt_rpc_notify] 0-glusterfsd-mgmt: failed to connect with remote-host: 10.32.0.48 (Transport endpoint is not connected)
[2016-04-26 10:34:06.156049] I [glusterfsd-mgmt.c:1824:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted all volfile servers
[2016-04-26 10:34:06.156189] W [glusterfsd.c:1236:cleanup_and_exit] (-->/usr/lib64/libgfrpc.so.0(rpc_clnt_notify-0x18eb4) [0x3fff791997bc] -->/usr/sbin/glusterfs() [0x100106ac] -->/usr/sbin/glusterfs(cleanup_and_exit-0x1c02c) [0x100098bc] ) 0-: received signum (1), shutting down
[2016-04-26 10:34:06.156283] I [fuse-bridge.c:5683:fini] 0-fuse: Unmounting '/tmp/a2'.
[2016-04-26 10:34:06.165563] W [glusterfsd.c:1236:cleanup_and_exit] (-->/lib64/libpthread.so.0() [0x8040b6b730] -->/usr/sbin/glusterfs(glusterfs_sigwaiter-0x1bdfc) [0x10009b04] -->/usr/sbin/glusterfs(cleanup_and_exit-0x1c02c) [0x100098bc] ) 0-: received signum (15), shutting down
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users