ZFS by default stores extended attributes in a hidden directory instead of extending the file inode, as XFS does.
There is a problem in the ZFS on Linux implementation: the function responsible for deleting files removes only the file itself and forgets to delete the hidden directory that stores its extended attributes. There seems to be little interest from the ZFS on Linux community in fixing this bug; it has been open since 2013.
There is an option to change this so that extended attributes are written to the inode instead of a hidden directory:
# zfs set xattr=sa <POOL_NAME>
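To check which mode a pool or dataset is currently using, a quick sketch (reusing the same <POOL_NAME> placeholder as above):
# zfs get xattr <POOL_NAME>
Note that the setting only affects extended attributes written after the change; existing directory-based xattrs are not migrated.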
The problem turns out to be with ZFS. I use ZFS on Linux as the underlying filesystem, and as per the docs you need to set zfs set acltype=posixacl on all the pools you are exporting via NFS. I did not do this, and it caused chaos. Once I did, the NFS issues went away and all is good. Thanks everyone!

On 3/8/16 6:42 AM, Soumya Koduri wrote:
The log file didn't have any errors logged. Please check the NFS client logs in '/var/log/messages' or using dmesg, and the brick logs as well.
Probably strace or a packet trace could help too. You could use the command below to capture a packet trace while running the I/Os on the node where the gluster-nfs server is running:
$ tcpdump -i any -s 0 -w /var/tmp/nfs.pcap tcp and not port 22
Check the file later to see what operation had failed (using filters: nfs, glusterfs).
Thanks, Soumya
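For example, assuming Wireshark's tshark is installed, the capture written by the tcpdump command above could later be filtered for the NFS and GlusterFS protocols like so:
$ tshark -r /var/tmp/nfs.pcap -Y nfs
$ tshark -r /var/tmp/nfs.pcap -Y glusterfs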
On 03/07/2016 09:41 PM, Mark Selby wrote:
Here are the logs that you requested
Please let me know if I can send you anything else.
I really appreciate you taking a look at this - thanks!
root@dc1strg001x /root 547# gluster vol info backups
Volume Name: backups
Type: Replicate
Volume ID: 71a26ea6-632d-4a1d-8610-e782ce2a5100
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: dc1strg001x:/zfspool/glusterfs/backups/data
Brick2: dc1strg002x:/zfspool/glusterfs/backups/data
Options Reconfigured:
nfs.disable: off
root@dc1strg001x /var/log/glusterfs 551# cat nfs.log
[2016-03-07 16:03:14.257919] I [MSGID: 100030] [glusterfsd.c:2318:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.6 (args: /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S /var/run/gluster/ad38be3bd1baece29e1b672e6659ae60.socket)
[2016-03-07 16:03:14.267862] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2016-03-07 16:03:14.273283] I [rpcsvc.c:2215:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: Configured rpc.outstanding-rpc-limit with value 16
[2016-03-07 16:03:14.284154] W [MSGID: 112153] [mount3.c:3929:mnt3svc_init] 0-nfs-mount: Exports auth has been disabled!
[2016-03-07 16:03:14.306163] I [rpc-drc.c:694:rpcsvc_drc_init] 0-rpc-service: DRC is turned OFF
[2016-03-07 16:03:14.306216] I [MSGID: 112110] [nfs.c:1494:init] 0-nfs: NFS service started
[2016-03-07 16:03:14.312901] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2016-03-07 16:03:14.314078] W [graph.c:357:_log_if_unknown_option] 0-nfs-server: option 'rpc-auth.auth-glusterfs' is not recognized
[2016-03-07 16:03:14.314137] W [graph.c:357:_log_if_unknown_option] 0-nfs-server: option 'rpc-auth-allow-insecure' is not recognized
[2016-03-07 16:03:14.314185] W [graph.c:357:_log_if_unknown_option] 0-nfs-server: option 'transport-type' is not recognized
[2016-03-07 16:03:14.314270] I [MSGID: 114020] [client.c:2118:notify] 0-backups-client-0: parent translators are ready, attempting connect on transport
[2016-03-07 16:03:14.315341] I [MSGID: 114020] [client.c:2118:notify] 0-backups-client-1: parent translators are ready, attempting connect on transport
[2016-03-07 16:03:14.315923] I [rpc-clnt.c:1847:rpc_clnt_reconfig] 0-backups-client-0: changing port to 49152 (from 0)
Final graph:
+------------------------------------------------------------------------------+
1: volume backups-client-0
2: type protocol/client
3: option ping-timeout 42
4: option remote-host dc1strg001x
5: option remote-subvolume /zfspool/glusterfs/backups/data
6: option transport-type socket
7: option username 42fa7a62-1420-4169-ad00-53c3481dbe5b
8: option password b71b3c88-51e0-464c-8b09-14b661fdb4d3
9: option send-gids true
10: end-volume
11:
12: volume backups-client-1
13: type protocol/client
14: option ping-timeout 42
15: option remote-host dc1strg002x
[2016-03-07 16:03:14.317412] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-backups-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
16: option remote-subvolume /zfspool/glusterfs/backups/data
17: option transport-type socket
18: option username 42fa7a62-1420-4169-ad00-53c3481dbe5b
19: option password b71b3c88-51e0-464c-8b09-14b661fdb4d3
20: option send-gids true
21: end-volume
22:
23: volume backups-replicate-0
24: type cluster/replicate
25: subvolumes backups-client-0 backups-client-1
26: end-volume
27:
28: volume backups-dht
29: type cluster/distribute
30: subvolumes backups-replicate-0
31: end-volume
32:
33: volume backups-write-behind
34: type performance/write-behind
35: subvolumes backups-dht
36: end-volume
37:
38: volume backups
39: type debug/io-stats
40: option latency-measurement off
41: option count-fop-hits off
42: subvolumes backups-write-behind
43: end-volume
44:
45: volume nfs-server
46: type nfs/server
47: option rpc-auth.auth-glusterfs on
48: option rpc-auth.auth-unix on
49: option rpc-auth.auth-null on
50: option rpc-auth.ports.insecure on
51: option rpc-auth-allow-insecure on
52: option transport-type socket
53: option transport.socket.listen-port 2049
54: option nfs.dynamic-volumes on
55: option nfs.nlm on
56: option nfs.drc off
57: option rpc-auth.addr.backups.allow *
58: option nfs3.backups.volume-id 71a26ea6-632d-4a1d-8610-e782ce2a5100
59: option nfs.backups.disable off
60: option nfs.logs.disable off
61: option nfs.users.disable off
62: subvolumes backups
63: end-volume
64:
+------------------------------------------------------------------------------+
[2016-03-07 16:03:14.318157] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-backups-client-0: Connected to backups-client-0, attached to remote volume '/zfspool/glusterfs/backups/data'.
[2016-03-07 16:03:14.318276] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-backups-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2016-03-07 16:03:14.318400] I [MSGID: 108005] [afr-common.c:3841:afr_notify] 0-backups-replicate-0: Subvolume 'backups-client-0' came back up; going online.
[2016-03-07 16:03:14.318470] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-backups-client-0: Server lk version = 1
[2016-03-07 16:03:14.496642] I [rpc-clnt.c:1847:rpc_clnt_reconfig] 0-backups-client-1: changing port to 49152 (from 0)
[2016-03-07 16:03:14.498394] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-backups-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-03-07 16:03:14.505580] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-backups-client-1: Connected to backups-client-1, attached to remote volume '/zfspool/glusterfs/backups/data'.
[2016-03-07 16:03:14.505627] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-backups-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2016-03-07 16:03:14.506210] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-backups-client-1: Server lk version = 1
[2016-03-07 16:03:14.507836] I [MSGID: 108031] [afr-common.c:1782:afr_local_discovery_cbk] 0-backups-replicate-0: selecting local read_child backups-client-0
On 3/6/16 9:13 PM, Jiffin Tony Thottan wrote:
On 05/03/16 07:12, Mark Selby wrote:
I am trying to use GlusterFS as a general purpose NFS file server. I have tried using the FUSE client, but the performance fall-off vs NFS is quite large.
Both the client and the server are Ubuntu 14.04.
I am using Gluster 3.6.9 because of the FUSE performance issues that have been reported with 3.7.8 (see https://bugzilla.redhat.com/show_bug.cgi?id=1309462)
I am having serious issues with a generic NFS client, as shown below. Basically most FOPs are giving me a Remote I/O error.
I would not think I was the first person to see these issues - but my Google-fu is not working.
Any and all help would be much appreciated
BTW - these operations work fine against a plain Linux NFS server.
root@dc1strg001x /var/log 448# gluster volume status
Status of volume: backups
Gluster process                                      Port    Online    Pid
------------------------------------------------------------------------------
Brick dc1strg001x:/zfspool/glusterfs/backups/data    49152   Y         6462
Brick dc1strg002x:/zfspool/glusterfs/backups/data    49152   Y         6382
NFS Server on localhost                              2049    Y         6619
Self-heal Daemon on localhost                        N/A     Y         6626
NFS Server on dc1strg002x                            2049    Y         6502
Self-heal Daemon on dc1strg002x                      N/A     Y         6509
root@vc1test001 /root 735# mount -o vers=3 -t nfs dc1strg001x:/backups /mnt/backups_nfs
root@vc1test001 /mnt/backups_nfs 737# dd if=/dev/zero of=testfile bs=16k count=16384
16384+0 records in
16384+0 records out
268435456 bytes (268 MB) copied, 2.46237 s, 109 MB/s
root@vc1test001 /mnt/backups_nfs 738# rm testfile
root@vc1test001 /mnt/backups_nfs 739# dd if=/dev/zero of=testfile bs=16k count=16384
dd: failed to open 'testfile': Remote I/O error
root@vc1test001 /var/tmp 743# rsync -av testfile /mnt/backups_nfs/
sending incremental file list
testfile
rsync: mkstemp "/mnt/backups_nfs/.testfile.bzg47C" failed: Remote I/O error (121)
sent 1,074,004,056 bytes  received 121 bytes  165,231,411.85 bytes/sec
total size is 1,073,741,824  speedup is 1.00
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1183) [sender=3.1.0]
Can you please provide the volume configuration (gluster vol info) and the log file for the NFS server which you mounted (/var/log/glusterfs)?
-- Jiffin
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
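For reference, the acltype fix Mark describes at the top of the thread comes down to one property per exported pool or dataset. A minimal sketch, reusing the <POOL_NAME> placeholder from the ZFS notes above:
# zfs set acltype=posixacl <POOL_NAME>
# zfs get acltype,xattr <POOL_NAME>
Combining it with xattr=sa keeps the POSIX ACLs (which are stored as extended attributes) in the inode rather than in the hidden xattr directory.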