On 15/03/21 5:11 pm, Zenon Panoussis wrote:
> Indeed, enabling granular was only possible when there were
> 0 files to heal. Re-disabling it, however, did not impose this
> limitation.
Ah yes, this is expected behavior: even after granular-entry-heal is
disabled, there is still enough information to do the entry heal in the
non-granular way, so disabling has no such precondition.
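For context, granular-entry-heal is toggled with a volume-level heal command; as noted above, enabling it requires an empty heal backlog, while disabling does not. A sketch using the volume name from the brick paths in this thread (gv0):

```
# Enabling only succeeds when 'gluster volume heal gv0 info' reports 0 pending entries
gluster volume heal gv0 granular-entry-heal enable

# Disabling works regardless of the pending-heal count
gluster volume heal gv0 granular-entry-heal disable
```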
> I get the same answer on all three nodes. This directory contains
> no subdirectories, only files.
Hmm, then the client4_0_mkdir_cbk failures in the glustershd.log must
be for a parallel heal of a directory which contains subdirs.
> [root@node01 ~]# find /gfs/gv0/vmail/net/provocation/oracle/Maildir/.Sent/cur/ -type f |wc -l
> 10264
> [root@node02 ~]# find /gfs/gv0/vmail/net/provocation/oracle/Maildir/.Sent/cur/ -type f |wc -l
> 10604
> [root@node03 ~]# find /gfs/gv0/vmail/net/provocation/oracle/Maildir/.Sent/cur/ -type f |wc -l
> 10603
> The figures don't fully add up to 4/343/344, but are very close.
> Nothing is in split-brain, so it simply looks like node01 is
> lagging behind the other two.
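To see exactly which file names node01 is missing relative to node02, the two per-brick listings can be compared with comm. A minimal sketch; the brick directories and mail file names below are simulated stand-ins for the real bricks (on the actual nodes you would run the find against the Maildir path quoted above):

```shell
set -e

# Simulate two bricks where "node01" is missing one file
tmp=$(mktemp -d)
mkdir -p "$tmp/node01" "$tmp/node02"
touch "$tmp/node01/mail1" "$tmp/node01/mail2"
touch "$tmp/node02/mail1" "$tmp/node02/mail2" "$tmp/node02/mail3"

# List base filenames per brick, sorted (comm requires sorted input)
find "$tmp/node01" -type f -printf '%f\n' | sort > "$tmp/node01.list"
find "$tmp/node02" -type f -printf '%f\n' | sort > "$tmp/node02.list"

# Lines unique to node02's list = files node01 is missing
missing=$(comm -13 "$tmp/node01.list" "$tmp/node02.list")
echo "$missing"
```

Running the same two-list comparison against the real bricks would show whether node01's deficit is a stable set of files awaiting heal or a moving target.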
Are there any file names inside
/gfs/gv0/.glusterfs/indices/entry-changes/011fcc1b-4d90-4c36-86ec-488aaa4db3b8
in any of the bricks? If this heal backlog was introduced when
granular-entry-heal was enabled, it must contain the list of files that
need to be healed.
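For concreteness, the check could look like this on each node (the GFID directory is the one quoted above; each name under it is a pending granular entry heal, so counting them gives a quick comparison against the heal backlog):

```
ls /gfs/gv0/.glusterfs/indices/entry-changes/011fcc1b-4d90-4c36-86ec-488aaa4db3b8/ | wc -l
```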
________
Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users