Hi Olaf,
Thanks so much for sharing this; it's hugely helpful, if only to make me feel less like I'm going crazy. I'll see if there's anything I can add to the bug report. I'm trying to develop a test to reproduce the issue now.
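In case it's useful, this is roughly the reproducer I'm experimenting with. It's just a sketch assuming a FUSE mount at /mnt/gluster and the default 64MB shard size; the paths, writer count, and file size are placeholders for our setup:

    #!/usr/bin/env bash
    # Rough reproducer: hammer the sharded volume with concurrent large
    # writes, then grep the failures for ESTALE.
    MOUNT=/mnt/gluster   # FUSE mount of the sharded volume (assumed path)
    JOBS=8               # concurrent writers
    SIZE_MB=512          # well past the default 64MB shard size

    for i in $(seq "$JOBS"); do
      dd if=/dev/zero of="$MOUNT/stress-$i.dat" bs=1M count="$SIZE_MB" \
         2>"/tmp/dd-$i.err" &
    done
    wait
    grep -l "Stale file handle" /tmp/dd-*.err && echo "reproduced ESTALE"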
We're running this in a sort of interactive HPC environment, so these errors are a bit hard for us to handle systematically, and they have a tendency to be quite disruptive to folks' work.
I've run into other issues with sharding as well, such as this:
https://lists.gluster.org/pipermail/gluster-users/2019-October/037241.html
I'm wondering, then, if maybe sharding isn't quite stable yet and it's more sensible for me to just disable the feature for now? I'm not quite sure what other implications that might have, but all the issues I've run into so far as a new gluster user seem to be related to shards.
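For reference, this is what I think checking and disabling would look like ("myvol" is a placeholder volume name); please correct me if I've misread the docs:

    # Check whether sharding is currently enabled on the volume:
    gluster volume get myvol features.shard

    # Disabling would be:
    #   gluster volume set myvol features.shard off
    # but from what I've read, files written while sharding was on can
    # appear truncated once it's off, so I'd plan to migrate data off and
    # recreate the volume rather than just flipping the setting.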
Thanks,
Tim
From: Olaf Buitelaar <olaf.buitelaar@xxxxxxxxx>
Sent: Wednesday, November 27, 2019 9:50 AM
To: Timothy Orme <torme@xxxxxxxxxxxx>
Cc: gluster-users <gluster-users@xxxxxxxxxxx>
Subject: [EXTERNAL] Re: Stale File Handle Errors During Heavy Writes

Hi Tim,
I've been suffering from this for a long time too. I'm not sure it's exactly the same situation since your setup is different, but it seems similar.
I've filed this bug report: https://bugzilla.redhat.com/show_bug.cgi?id=1732961 which you might be able to enrich.
To clean up the stale files I've made this bash script: https://gist.github.com/olafbuitelaar/ff6fe9d4ab39696d9ad6ca689cc89986 (it's slightly outdated) which you could use as inspiration. It basically removes the stale files as suggested here: https://lists.gluster.org/pipermail/gluster-users/2018-March/033785.html
Please be aware the script won't work if you have 2 (or more) bricks of the same volume on the same server (since it always takes the first path found).
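In case it helps to see the essential step the script automates (as suggested in that 2018 thread): on each brick, remove the stale shard together with its gfid hard link under .glusterfs. A minimal sketch, where the brick path and shard name are just examples, and files should be backed up before removal:

    brick=/data/brick1/ovirt-data                             # example brick root
    shard=.shard/1b0ba5c2-dd2b-45d0-9c4b-a39b2123cc13.14451   # example stale shard

    # Read the gfid xattr, printed as e.g. trusted.gfid=0x1234...
    gfid=$(getfattr -n trusted.gfid -e hex "$brick/$shard" \
           | awk -F= '/^trusted.gfid/ {print $2}')
    g=${gfid#0x}

    # The hard link lives at .glusterfs/aa/bb/<gfid in uuid form>
    uuid="${g:0:8}-${g:8:4}-${g:12:4}-${g:16:4}-${g:20:12}"
    rm -v "$brick/$shard" "$brick/.glusterfs/${g:0:2}/${g:2:2}/$uuid"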
I invoke the script via ansible like this (since the script needs to run on all bricks):
- hosts: host1,host2,host3
  tasks:
    - shell: 'bash /root/clean-stale-gluster-fh.sh --host="{{ intif.ip | first }}" --volume=ovirt-data --backup="/backup/stale/gfs/ovirt-data" --shard="{{ item }}" --force'
      with_items:
        - 1b0ba5c2-dd2b-45d0-9c4b-a39b2123cc13.14451

Fortunately for me the issue seems to have disappeared; it's now about a month since I last received one, while before it was about every other day.
The biggest thing that seemed to resolve it was more disk space. There was plenty before as well: the gluster volume was at about 85% full, and the individual 8TB disk arrays had about 20-30% free, though there were servers in the mix with smaller disk arrays but similar available space (in percent). I'm now at a much lower usage percentage.
So my latest running theory is that it has something to do with how gluster allocates the shards. Since placement is based on the shard's hash, it might want to place a shard in a certain sub-volume, but then conclude it doesn't have enough space there and write a marker redirecting it to another sub-volume (I suspect this marker is the stale file). However, rebalances don't fix the issue. Also, this still doesn't seem to explain why most stale files always end up in the first sub-volume.
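If you want to look for those redirect markers yourself: as far as I know DHT's link-to files are zero-byte files with only the sticky bit set, carrying a trusted.glusterfs.dht.linkto xattr naming the sub-volume they point at. Something like this (brick path is an example) should list them:

    brick=/data/brick1/ovirt-data   # example brick root

    # Link-to files are empty with mode ---------T (sticky bit only)
    find "$brick/.shard" -type f -size 0 -perm 1000 | while read -r f; do
      getfattr -n trusted.glusterfs.dht.linkto -e text "$f" 2>/dev/null
    done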
Unfortunately I have no proof this is actually the root cause, other than that the symptom "disappeared" once gluster had more space to work with.
Best,
Olaf
On Wed, Nov 27, 2019 at 02:38, Timothy Orme <torme@xxxxxxxxxxxx> wrote: