Re: Glusterfs nfs mounts not showing directories



Do you see any issues in the logs?

Is it only with Ganesha?

Best Regards,
Strahil Nikolov

On Mon, Sep 6, 2021 at 21:37, John Cholewa
<jcholewa@xxxxxxxxx> wrote:
My distributed volume had an issue on Friday which required a reboot
of the primary node. Since then I'm seeing a strange problem: when
the volume is mounted via ganesha-nfs, from either the primary node
itself or a random workstation on the network, I see files from both
bricks, but no directories at all. It's just a listing of the files.
But I *can* still list the contents of a directory if I know it
exists. Similarly, that listing shows the files (from both bricks) of
that directory, but no subdirectories. Example:

$ ls -F /mnt

$ ls -F /mnt/flintstone
test test1 test2 test3

$ ls -F /mnt/flintstone/wilma
file1 file2 file3

I've tried restarting glusterd on both nodes and rebooting the other
node as well. Mount options in fstab are defaults,_netdev,nofail. I
tried temporarily disabling the firewall in case that was a
contributing factor.
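One way to narrow this down (a hedged sketch, using the hostname yuzz and volume gv0 from the output below; the FUSE mount point is a temporary directory): compare a gluster-native FUSE mount against the Ganesha NFS mount. If directories appear over FUSE but not over NFS, the readdir problem is on the Ganesha side rather than in gluster itself.

```shell
# Mount the volume directly via the gluster FUSE client and compare
# its directory listing with the Ganesha NFS mount at /mnt.
VOL=gv0
NODE=yuzz
T=$(mktemp -d)
mount -t glusterfs "${NODE}:/${VOL}" "$T" 2>/dev/null
ls -F "$T"            # directories should appear here if gluster itself is healthy
ls -F /mnt            # the NFS mount that is currently missing directories
umount "$T" 2>/dev/null
rmdir "$T" 2>/dev/null
```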

This has been working pretty well for over two years, and it's
survived system updates and reboots on the nodes, and there hasn't
been a recent software update that would have triggered this. The data
itself appears to be fine. 'gluster peer status' on each node shows
that the other is connected.

What's a good way to further troubleshoot this or to tell gluster to
figure itself out?  Would "gluster volume reset"  bring the
configuration to its original state without damaging the data in the
bricks?  Is there something I should look out for in the logs that
might give a clue?
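For the logs, a sketch of where I have been looking (the default log locations for glusterd, the brick processes, and Ganesha; the brick log filename is derived from the brick path and may differ):

```shell
# Sweep the usual gluster/ganesha logs for error- and warning-severity
# lines. Gluster log entries carry a severity letter: " E " for errors,
# " W " for warnings.
for f in /var/log/glusterfs/glusterd.log \
         /var/log/glusterfs/bricks/gfs-brick1-gv0.log \
         /var/log/ganesha/ganesha.log; do
    echo "== $f =="
    grep -E ' [EW] ' "$f" 2>/dev/null | tail -n 20
done
```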


# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.4 LTS
Release:        18.04
Codename:      bionic

# gluster --version
glusterfs 7.5
Repository revision: git://
Copyright (c) 2006-2016 Red Hat, Inc. <>
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.

# gluster volume status
Status of volume: gv0
Gluster process                            TCP Port  RDMA Port  Online  Pid
Brick yuzz:/gfs/brick1/gv0                  N/A      N/A        Y      2909
Brick wum:/gfs/brick1/gv0                  49152    0          Y      2885

Task Status of Volume gv0
There are no active volume tasks
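In the status above, the yuzz brick is Online but shows N/A for its TCP port, which normally means the brick process is not accepting client connections. A hedged sketch of one recovery step: "start force" respawns any brick process that is down or not listening, without touching the data in the bricks.

```shell
# Skip quietly if the gluster CLI is not installed on this host.
command -v gluster >/dev/null 2>&1 || exit 0
# Respawn any missing/non-listening brick process for gv0, then confirm
# that both bricks now report a TCP port.
gluster volume start gv0 force
gluster volume status gv0
```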

# gluster volume info
Volume Name: gv0
Type: Distribute
Volume ID: dcfdeed9-8fe9-4047-b18a-1a908f003d7f
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Brick1: yuzz:/gfs/brick1/gv0
Brick2: wum:/gfs/brick1/gv0
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
features.cache-invalidation: on
cluster.readdir-optimize: off
performance.parallel-readdir: off
performance.cache-size: 8GB
network.inode-lru-limit: 1000000
performance.nfs.stat-prefetch: off

# gluster pool list
UUID                                    Hostname        State
4b84240e-e73a-46da-9271-72f6001a8e18    wum            Connected
7de76707-cd99-4916-9c6b-ac6f26bda373    localhost      Connected

Output of gluster get-state:
MYUUID: 7de76707-cd99-4916-9c6b-ac6f26bda373
op-version: 31302

[Global options]

Peer1.primary_hostname: wum
Peer1.uuid: 4b84240e-e73a-46da-9271-72f6001a8e18
Peer1.state: Peer in Cluster
Peer1.connected: Connected

[Volumes]
Volume1.name: gv0
Volume1.id: dcfdeed9-8fe9-4047-b18a-1a908f003d7f
Volume1.type: Distribute
Volume1.transport_type: tcp
Volume1.status: Started
Volume1.brickcount: 2
Volume1.Brick1.path: yuzz:/gfs/brick1/gv0
Volume1.Brick1.hostname: yuzz
Volume1.Brick1.port: 0
Volume1.Brick1.rdma_port: 0
Volume1.Brick1.status: Started
Volume1.Brick1.spacefree: 72715274395648Bytes
Volume1.Brick1.spacetotal: 196003244277760Bytes
Volume1.Brick2.path: wum:/gfs/brick1/gv0
Volume1.Brick2.hostname: wum
Volume1.snap_count: 0
Volume1.stripe_count: 1
Volume1.replica_count: 1
Volume1.subvol_count: 2
Volume1.arbiter_count: 0
Volume1.disperse_count: 0
Volume1.redundancy_count: 0
Volume1.quorum_status: not_applicable
Volume1.snapd_svc.online_status: Offline
Volume1.snapd_svc.inited: True
Volume1.rebalance.id: 00000000-0000-0000-0000-000000000000
Volume1.rebalance.status: not_started
Volume1.rebalance.failures: 0
Volume1.rebalance.skipped: 0
Volume1.rebalance.lookedup: 0
Volume1.rebalance.files: 0
Volume1.rebalance.data: 0Bytes
Volume1.time_left: 0
Volume1.gsync_count: 0
Volume1.options.nfs.disable: on
Volume1.options.transport.address-family: inet
Volume1.options.features.cache-invalidation: on
Volume1.options.cluster.readdir-optimize: off
Volume1.options.performance.parallel-readdir: off
Volume1.options.performance.cache-size: 8GB
Volume1.options.network.inode-lru-limit: 1000000
Volume1.options.performance.nfs.stat-prefetch: off

[Services]
svc1.name: glustershd
svc1.online_status: Offline
svc2.name: nfs
svc2.online_status: Offline
svc3.name: bitd
svc3.online_status: Offline
svc4.name: scrub
svc4.online_status: Offline
svc5.name: quotad
svc5.online_status: Offline

Base port: 49152
Last allocated port: 49152

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Gluster-users mailing list

