split brain due to conflicting trusted.glusterfs.mdata xattr?

Dear list,

Today I received a "Directory not empty" error while trying to remove a directory from the FUSE mount of a distribute-replicate volume. Looking in the directory, I found a few files whose `ls -l` output was all question marks (i.e. stat() on them was failing):

-?????????? ? ?      ?         ?            ? ._Log.out

I checked the volume heal info and there were 0 entries to heal; all bricks were healthy, the self-heal daemons were up, etc. Looking closer at one of these files on the backend bricks, I found that it had the same sha256sum on every brick within its replica set and, as expected, did not exist in any other replica set. The only discrepancy I could find was that the parent directory had a different trusted.glusterfs.mdata xattr on several bricks. I removed the directory from each of the bricks on the other replica sets and then ran `stat` on the file from the FUSE mount, and that fixed it: the parent directory now has the same mdata xattr on every brick.
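For anyone hitting the same symptom, the check boils down to comparing the trusted.glusterfs.mdata xattr of the parent directory across bricks. A rough sketch follows; the volume name, brick paths, and the hex values are hypothetical placeholders, not my real layout:

```shell
#!/bin/sh
# Commands of the kind I ran (commented out, since they need a live
# Gluster setup; "myvol" and /bricks/data{1,2}/problemdir are examples):
#
#   gluster volume heal myvol info
#   getfattr -n trusted.glusterfs.mdata -e hex /bricks/data1/problemdir
#   getfattr -n trusted.glusterfs.mdata -e hex /bricks/data2/problemdir

# Small helper to flag a mismatch among xattr values collected per brick:
check_match() {
    first="$1"; shift
    for v in "$@"; do
        [ "$v" = "$first" ] || { echo "MISMATCH"; return 1; }
    done
    echo "OK"
}

# Fake hex values standing in for trusted.glusterfs.mdata:
check_match 0x0100aa 0x0100aa 0x0100aa         # prints OK
check_match 0x0100aa 0x0100aa 0x0100bb || true # prints MISMATCH
```

In my case the file contents matched within the replica set; only the parent directory's mdata xattr differed between bricks.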

So my question is: is this a new type of split brain? I don't mind fixing a few of these manually (especially since I was trying to remove these files anyway), but it would be good to know more. We are using GlusterFS 8.5 on CentOS 7.

Thank you!

--
________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
