> Hmm, then the client4_0_mkdir_cbk failures in the glustershd.log
> must be for a parallel heal of a directory which contains subdirs.

Running "gluster volume heal info" gives the following results:

node01: 3 gfids and one named directory, namely Maildir/.Sent/cur.
Running gfid2dirname.sh on the 3 gfids returns one error and two
unrelated directories.

node02: 2 gfids, one named directory (again Maildir/.Sent/cur), and a
whole lot of files in Maildir/.Sent/cur. Running gfid2dirname.sh on the
2 gfids returns the same two unrelated directories as on node01.

node03: a long list of gfids, no named files or directories. Running
gfid2dirname.sh on those gfids returns a long list of errors, plus
Maildir/.Sent/cur and the same two unrelated directories.

I don't know how to interpret this, but it certainly looks as if
Maildir/.Sent/cur needs to be healed on all three bricks. Logically
that shouldn't be possible: if not even one brick holds the data of an
object, that object should not exist at all.

> Are there any file names inside
> /gfs/gv0/.glusterfs/indices/entry-changes/011fcc1b-4d90-4c36-86ec-488aaa4db3b8
> in any of the bricks?

node01: empty.
node02: 388 filenames, no directories.
node03: 394 filenames, no directories.

Would simply re-copying the entire Maildir/.Sent/cur directory and its
contents onto the volume solve the problem, or make it worse?

________

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
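
P.S. In case it helps anyone following along: the gfid-to-backing-path
layout that a helper like gfid2dirname.sh has to walk can be sketched as
below. The brick path and gfid are the ones from the quoted question;
everything else is generic bash and only illustrates the on-brick layout,
not the actual script.

```shell
#!/usr/bin/env bash
# Sketch: where a gfid's backing entry lives under a brick's .glusterfs
# tree. The gfid and brick path are taken from the question above.
gfid="011fcc1b-4d90-4c36-86ec-488aaa4db3b8"
brick="/gfs/gv0"

# The backing entry sits at .glusterfs/<first 2 hex chars>/<next 2>/<gfid>.
backing="$brick/.glusterfs/${gfid:0:2}/${gfid:2:2}/$gfid"
echo "$backing"
# -> /gfs/gv0/.glusterfs/01/1f/011fcc1b-4d90-4c36-86ec-488aaa4db3b8

# For a directory that entry is a symlink into its parent directory, so
# `readlink "$backing"` reveals the directory's name; for a regular file
# it is a hardlink, so `find "$brick" -samefile "$backing"` locates the
# real path on the brick.
```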