Hi all,
Running into a problem with my gluster here in a LAB env. Production is a slightly different build (distributed replicated with multiple arbiter bricks) and I don't see the same errors there...yet. I only seem to have this problem on two client VMs on which I ran "setfattr -n trusted.io-stats-dump -v output_file_id mount_point" while trying to test for latency issues. Curious if anyone has run into a similar situation with the bad file descriptor mess below and knows what I'm doing wrong or missing, besides everything...
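For context, the io-stats dump was triggered roughly like this (the dump identifier and mount path are illustrative, not the exact ones I used):

```shell
# The volume already has latency collection enabled
# (diagnostics.latency-measurement / diagnostics.count-fop-hits, see volume info below)

# On the client, setting this xattr against the mount point triggers a one-shot
# io-stats dump; the value becomes part of the dump filename on the client
setfattr -n trusted.io-stats-dump -v iostats_client01 /mnt/stor

# The latency report then shows up under /var/run/gluster/ on the client
ls /var/run/gluster/ | grep iostats_client01
```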
I had built this gluster in one location, only to receive notice months later that the clients would now be connecting from another location (different vCenter, but with a 10G backend). It's basically storing hundreds of thousands of what are usually small flat files.
Ran into what seemed to be latency issues (4+ sec response) even doing something as simple as an ls of a directory with only a few folders in it, mounted on the remote clients. Granted, some of those folders have thousands of small files... That led me down the setfattr path testing latency.
Reads/writes on that directory yesterday evening, after the setfattr was set, were fine (albeit slightly slow), and I got a latency report in /var/run/gluster/logfilename.
I figured we could work out the latency issues, but the problem encountered this morning is that I now cannot create a file via touch, dd, vi, etc. on that same mount point without the following errors. This now happens both from a client in the same location as the gluster and from a remote client.
"# touch /mnt/stor/test3
touch: failed to close '/mnt/stor/test3': Bad file descriptor"
Yet the file actually gets created...
# ls -al /mnt/stor/test3
-rw-r--r--. 1 root root 0 Feb 14 12:39 /mnt/stor/test3
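In case it helps, the failure can be pinned to the close() syscall with strace (the test filename here is just an example):

```shell
# openat() succeeds (which is why the file still gets created), but the final
# close() returns EBADF; on a FUSE mount that close() is what generates the
# FLUSH that the client log shows failing
strace -f -e trace=openat,close touch /mnt/stor/test9
```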
If I mount the volume directly on one of the gluster servers themselves and create a file that way (via dd, touch, etc.), I receive no error...
I've tried unmounting on the clients and remounting. No effect. I even tried rebooting a client. No effect. As far as I can tell from the gluster side, the gluster volume status, peer status, etc... all think the gluster is fine. Thoughts/ideas?
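The remount and the health checks I ran looked roughly like this (server/volume names as in the volume info below; mount path illustrative):

```shell
# Client side: unmount and remount the FUSE mount (no effect on the errors)
umount /mnt/stor
mount -t glusterfs stor01:/stor /mnt/stor

# Server side: all of these report healthy as far as I can tell
gluster peer status
gluster volume status stor
gluster volume heal stor info
```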
I also see the following in the /var/log/glusterfs/mountpoint.log file on the clients for each attempt to create a file:
Similar errors appear for clients both in the same location as the gluster and remote:
"[2023-02-14 16:23:27.705871] I [MSGID: 122062] [ec.c:334:ec_up] 0-stor-disperse-1: Going UP
[2023-02-14 16:23:27.707347] I [fuse-bridge.c:5271:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.33
[2023-02-14 16:23:27.707374] I [fuse-bridge.c:5899:fuse_graph_sync] 0-fuse: switched to graph 0
[2023-02-14 16:23:45.648560] E [MSGID: 122066] [ec-common.c:1273:ec_prepare_update_cbk] 0-stor-disperse-0: Unable to get config xattr. FOP : 'FXATTROP' failed on gfid 864dbaf8-1012-4b79-afe4-89de0ace2628 [No data available]
[2023-02-14 16:23:45.648620] W [fuse-bridge.c:1697:fuse_setattr_cbk] 0-glusterfs-fuse: 98: SETATTR() /test8 => -1 (No data available)
[2023-02-14 16:23:45.648653] E [MSGID: 122034] [ec-common.c:706:ec_child_select] 0-stor-disperse-0: Insufficient available children for this request (have 0, need 2). FOP : 'FXATTROP' failed on gfid 864dbaf8-1012-4b79-afe4-89de0ace2628
[2023-02-14 16:23:45.648663] E [MSGID: 122037] [ec-common.c:2320:ec_update_size_version_done] 0-stor-disperse-0: Failed to update version and size. FOP : 'FXATTROP' failed on gfid 864dbaf8-1012-4b79-afe4-89de0ace2628 [Input/output error]
[2023-02-14 16:23:45.648770] E [MSGID: 122077] [ec-generic.c:204:ec_flush] 0-stor-disperse-0: Failing FLUSH on 864dbaf8-1012-4b79-afe4-89de0ace2628 [Bad file descriptor]
[2023-02-14 16:23:45.649022] W [fuse-bridge.c:1945:fuse_err_cbk] 0-glusterfs-fuse: 99: FLUSH() ERR => -1 (Bad file descriptor)
[2023-02-14 16:23:45.648996] E [MSGID: 122077] [ec-generic.c:204:ec_flush] 0-stor-disperse-0: Failing FLUSH on 864dbaf8-1012-4b79-afe4-89de0ace2628 [Bad file descriptor]"
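Given the "Unable to get config xattr" message, the EC metadata the disperse translator is failing to read can also be inspected directly on a brick's copy of the affected file (brick path per the volume info below; the exact xattr set varies by version):

```shell
# On one of the gluster servers, dump all trusted.* xattrs on the brick copy.
# A healthy file on a disperse brick should carry trusted.ec.config,
# trusted.ec.version and trusted.ec.size, among others.
getfattr -d -m . -e hex /data/brick01a/brick/test8
```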
Gluster volume info sans hostname/ip specifics:
# gluster volume info stor
Volume Name: stor
Type: Distributed-Disperse
Volume ID: 6e94e3ce-cc15-494d-a2ad-3729e7589cdd
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: stor01:/data/brick01a/brick
Brick2: stor02:/data/brick02a/brick
Brick3: stor03:/data/brick03a/brick
Brick4: stor01:/data/brick01b/brick
Brick5: stor02:/data/brick02b/brick
Brick6: stor03:/data/brick03b/brick
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
features.ctime: off
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
Thanks,
Brad
________
Community Meeting Calendar:
Schedule - Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users