Re: Files exist, but sometimes are not seen by the clients: "No such file or directory"

I see the volume tank is a plain Distribute volume. If a brick disconnects, the files on that brick become unavailable. Do you see any connection errors in the mount log?
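
A quick way to check, as a sketch (assuming the default client log naming, where the mount point /nfs/tank maps to /var/log/glusterfs/nfs-tank.log):

$ grep -Ei "disconnect|connected to" /var/log/glusterfs/nfs-tank.log | tail -n 50
$ sudo gluster volume status tank    # run on a server node; every brick should show Online: Y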

---
Aravinda
Kadalu Technologies



---- On Thu, 04 Jan 2024 01:24:55 +0530 Carlo Rodrigues <carlo.rodrigues@xxxxxxxxxxxx> wrote ---

Hello all,

 

We're having problems with files that suddenly stop being visible on the FUSE clients.

I haven't yet found a way to reproduce this; it happens every once in a while.

 

Sometimes you try to ls a file and it can't be found.

If you then run ls on the parent directory, the file shows up in the output, and after that you can access it.

I mention ls, but the problem also manifests when accessing the file in any other way, not only when listing it with the ls command.

 

Example (file names and paths edited for brevity):

 

1. ls on the file returns "No such file or directory".

 

$ ls -l /home/RESEARCH/user.x/some_path/idtracker.log

ls: cannot access /home/RESEARCH/user.x/some_path/idtracker.log: No such file or directory

 

2. ls on the parent directory: the file is shown in the output.

 

$ ls -l /home/RESEARCH/user.x/some_path/

total 334500

-rw-r--r-- 1 user.x group_x 348521772 Jan 19 15:26 file_17.avi

-rw-r--r-- 1 user.x group_x    978252 Jan 19 15:26 file_17.csv

-rw-r--r-- 1 user.x group_x      1185 Jun  5 10:05 idtracker.log

drwxr-xr-x 2 user.x group_x      4096 Jun  2 21:17 segm

 

3. ls on the file again: this time, it is found.

 

$ ls -l /home/RESEARCH/user.x/some_path/idtracker.log

-rw-r--r-- 1 user.x group_x 1185 Jun  5 10:05 /home/RESEARCH/user.x/some_path/idtracker.log
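
When a lookup fails like this, it can help to ask DHT which brick the path resolves to. A sketch using the pathinfo virtual xattr on the FUSE mount (same shortened path as above):

$ getfattr -n trusted.glusterfs.pathinfo -e text /home/RESEARCH/user.x/some_path/idtracker.log

The output names the brick holding the file, so you can see whether the failing lookups always land on the same brick or server.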

 

 

The GlusterFS filesystem is mounted like this:

storage:/tank on /nfs/tank type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
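
For reference, a sketch of how a mount like this is typically created (the exact client options used here are an assumption on my part):

$ mount -t glusterfs storage:/tank /nfs/tank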

 

Before the change, cluster.readdir-optimize was on, cluster.lookup-optimize was on, and cluster.lookup-unhashed was off.

 

I've now flipped those three options to check whether the problem still appears.
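
For reference, these options can be toggled with gluster volume set; a sketch matching the values shown in the configuration below:

$ sudo gluster volume set tank cluster.readdir-optimize off
$ sudo gluster volume set tank cluster.lookup-optimize off
$ sudo gluster volume set tank cluster.lookup-unhashed on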

 

Current configuration:

 

$ sudo gluster volume info tank

 

Volume Name: tank

Type: Distribute

Volume ID: 2c3ca623-c1df-4b5f-b20e-5222d189f834

Status: Started

Snapshot Count: 0

Number of Bricks: 24

Transport-type: tcp

Bricks:

Brick1: swp-gluster-05:/tank/volume1/brick

Brick2: swp-gluster-06:/tank/volume1/brick

Brick3: swp-gluster-07:/tank/volume1/brick

Brick4: swp-gluster-05:/tank/volume2/brick

Brick5: swp-gluster-06:/tank/volume2/brick

Brick6: swp-gluster-07:/tank/volume2/brick

Brick7: swp-gluster-05:/tank/volume3/brick

Brick8: swp-gluster-06:/tank/volume3/brick

Brick9: swp-gluster-07:/tank/volume3/brick

Brick10: swp-gluster-05:/tank/volume4/brick

Brick11: swp-gluster-06:/tank/volume4/brick

Brick12: swp-gluster-07:/tank/volume4/brick

Brick13: swp-gluster-01:/tank/volume1/brick

Brick14: swp-gluster-02:/tank/volume1/brick

Brick15: swp-gluster-03:/tank/volume1/brick

Brick16: swp-gluster-04:/tank/volume1/brick

Brick17: swp-gluster-01:/tank/volume2/brick

Brick18: swp-gluster-02:/tank/volume2/brick

Brick19: swp-gluster-03:/tank/volume2/brick

Brick20: swp-gluster-04:/tank/volume2/brick

Brick21: swp-gluster-01:/tank/volume3/brick

Brick22: swp-gluster-02:/tank/volume3/brick

Brick23: swp-gluster-03:/tank/volume3/brick

Brick24: swp-gluster-04:/tank/volume3/brick

Options Reconfigured:

cluster.lookup-unhashed: on

cluster.lookup-optimize: off

performance.aggregate-size: 1MB

performance.read-ahead-page-count: 16

performance.nl-cache-timeout: 600

performance.nl-cache: on

network.inode-lru-limit: 200000

performance.md-cache-timeout: 600

performance.cache-invalidation: on

performance.cache-samba-metadata: on

features.cache-invalidation-timeout: 600

features.cache-invalidation: on

performance.write-behind: off

performance.cache-size: 128MB

storage.fips-mode-rchecksum: on

transport.address-family: inet

nfs.disable: on

features.inode-quota: on

features.quota: on

server.event-threads: 32

client.event-threads: 16

cluster.readdir-optimize: off

performance.io-thread-count: 64

performance.readdir-ahead: on

performance.client-io-threads: on

performance.parallel-readdir: disable

performance.read-ahead: on

performance.stat-prefetch: on

performance.open-behind: off

features.quota-deem-statfs: on

 

Has anyone gone through this before?

Thanks in advance.

 

Carlo Rodrigues

________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users

