Hi all! We're currently running a 4-node cluster on Gluster 3.1.3. After a staged server update/reboot, only ONE of the four servers shows mismatches in the file attributes: 28 files differ from /0x000000000000000000000000/, the "all-in-sync" state. No sync or self-heal has happened within the last 16 hours; we checked last night, this morning, and again just now. Even after opening each file with /od -c <file> | head -2/, the self-heal/sync process doesn't seem to start. There are no errors in the logs. How can I check that things ARE happening correctly?

0 root@de-dc1-c1-pserver5:~ # getfattr -R -d -e hex -m "trusted.afr." /mnt/gluster/brick?/storage | grep -v 0x000000000000000000000000 | grep -A1 -B1 trusted
getfattr: Removing leading '/' from absolute path names
# file: mnt/gluster/brick0/storage/images/1831/db88d55e-3282-c7c6-d1dd-ec41a665011f/hdd-images/8987
trusted.afr.storage0-client-0=0x000000000100000000000000
--
# file: mnt/gluster/brick0/storage/images/1831/92f63f17-eb6c-8dba-2b9c-2e9cc52a8b2c/hdd-images/8786
trusted.afr.storage0-client-0=0x000000000100000000000000
--
# file: mnt/gluster/brick0/storage/images/1831/6ae6c5eb-e6e2-4dfe-7bb3-75c622910f27/hdd-images/9113
trusted.afr.storage0-client-0=0x000000000100000000000000
--
# file: mnt/gluster/brick0/storage/images/1828/4e5fd475-19b3-c9a7-1ad0-e4da528e6dbd/iso-images/11091
trusted.afr.storage0-client-0=0x000000000200000000000000
--
# file: mnt/gluster/brick0/storage/images/1853/3df576b8-4206-e45a-33d7-433d56b700f0/iso-images/1957
trusted.afr.storage0-client-0=0x000000000200000000000000
# file: mnt/gluster/brick0/storage/images/1853/3df576b8-4206-e45a-33d7-433d56b700f0/iso-images/1960
trusted.afr.storage0-client-0=0x000000000200000000000000
--
# file: mnt/gluster/brick0/storage/images/2003/5e9b2bdc-a158-796f-6d81-60f39aee5137/hdd-images/9772
trusted.afr.storage0-client-0=0x000000000200000000000000
--
# file: mnt/gluster/brick0/storage/images/1962/5c4cd738-bb56-d723-f001-0428e55ea81b/iso-images/8110
trusted.afr.storage0-client-0=0x000000000200000000000000
--
# file: mnt/gluster/brick0/storage/images/1787/a190f24c-ed40-5642-4226-f00726dfc99f/iso-images/9837
trusted.afr.storage0-client-0=0x000000000200000000000000
--
# file: mnt/gluster/brick0/storage/images/1787/ad8179fd-4f00-086c-955f-c2e469809e64/iso-images/2854
trusted.afr.storage0-client-0=0x000000000200000000000000
# file: mnt/gluster/brick0/storage/images/1787/ad8179fd-4f00-086c-955f-c2e469809e64/iso-images/10703
trusted.afr.storage0-client-0=0x000000000200000000000000
--
# file: mnt/gluster/brick0/storage/images/1787/1782da17-059c-d159-373a-9ad9f5f9289f/iso-images/8519
trusted.afr.storage0-client-0=0x000000000200000000000000
--
# file: mnt/gluster/brick0/storage/images/1787/aed64fba-2372-6f06-0690-be46136464a0/iso-images/10258
trusted.afr.storage0-client-0=0x000000000200000000000000
# file: mnt/gluster/brick0/storage/images/1787/aed64fba-2372-6f06-0690-be46136464a0/iso-images/10452
trusted.afr.storage0-client-0=0x000000000200000000000000
--
# file: mnt/gluster/brick0/storage/images/1787/cd6089d7-c2cd-a5e1-2130-770fe028b5e3/iso-images/10511
trusted.afr.storage0-client-0=0x000000000200000000000000
--
# file: mnt/gluster/brick0/storage/images/1834/02d04e16-40db-d244-aa4e-3e53cfaa2405/iso-images/504
trusted.afr.storage0-client-0=0x000000000200000000000000
--
# file: mnt/gluster/brick0/storage/images/1978/21527903-ca4e-4715-b40b-30c150f86d44/iso-images/9275
trusted.afr.storage0-client-0=0x000000000100000000000000
# file: mnt/gluster/brick0/storage/images/1978/21527903-ca4e-4715-b40b-30c150f86d44/iso-images/9511
trusted.afr.storage0-client-0=0x000000000100000000000000
--
# file: mnt/gluster/brick1/storage/images/1828/fc701d50-0b29-7827-89c0-77134ba96205/iso-images/9442
trusted.afr.storage0-client-2=0x000000000200000000000000
--
# file: mnt/gluster/brick1/storage/images/1878/875ed0c0-38b3-4552-7f1f-49a619996e5c/hdd-images/5758
trusted.afr.storage0-client-3=0x00000000ffffffff00000000
--
# file: mnt/gluster/brick1/storage/images/1787/ad8179fd-4f00-086c-955f-c2e469809e64/iso-images/2857
trusted.afr.storage0-client-2=0x000000000200000000000000
# file: mnt/gluster/brick1/storage/images/1787/ad8179fd-4f00-086c-955f-c2e469809e64/iso-images/10773
trusted.afr.storage0-client-2=0x000000000200000000000000
# file: mnt/gluster/brick1/storage/images/1787/ad8179fd-4f00-086c-955f-c2e469809e64/iso-images/10648
trusted.afr.storage0-client-2=0x000000000200000000000000
--
# file: mnt/gluster/brick1/storage/images/2003/8d7880ff-e7b2-3996-3fa3-ddb8022ca403/iso-images/9979
trusted.afr.storage0-client-2=0x000000000200000000000000
--
# file: mnt/gluster/brick1/storage/images/2003/116a4a8f-c8c2-6f70-1256-b29477d65e72/iso-images/10587
trusted.afr.storage0-client-2=0x000000000200000000000000
--
# file: mnt/gluster/brick1/storage/images/1956/ff4bbfd7-3b1a-00da-c901-c35cd967b600/iso-images/6815
trusted.afr.storage0-client-2=0x000000000200000000000000
--
# file: mnt/gluster/brick1/storage/images/1831/d56262c6-7f53-ec1b-baaf-15ceee9914ac/iso-images/9196
trusted.afr.storage0-client-2=0x000000000100000000000000
--
# file: mnt/gluster/brick1/storage/images/1959/cd55c5f3-9aa1-bfd9-99a0-01c13a7d8559/hdd-images/9324
trusted.afr.storage0-client-2=0x000000000200000000000000

All three other servers are happy; just pserver5 seems to be "stuck" ... BTW, is there a way to relate the client number to a real machine? I.e., how do I check which machine is "client-0" or "client-2"?
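One thing I'm not sure about: the /od -c/ reads above went against the brick directories under /mnt/gluster/brick?. If self-heal is only triggered by lookups that go through a glusterfs client mount, that would explain why nothing happens. A minimal sketch of what I'd try next, assuming a hypothetical client mount point /mnt/storage0 (not the brick path):

```python
import os

def trigger_self_heal(mount_point: str) -> int:
    """Walk a glusterfs CLIENT mount and stat every entry.

    A lookup (stat) through the client mount is what gives AFR a
    chance to self-heal a file; reading the brick backend directly
    does not go through the replicate translator at all.
    Returns the number of entries stat'ed.
    """
    count = 0
    for root, dirs, files in os.walk(mount_point):
        for name in dirs + files:
            try:
                os.stat(os.path.join(root, name))
                count += 1
            except OSError:
                pass  # entry may vanish mid-walk; skip it
    return count

# trigger_self_heal("/mnt/storage0")  # hypothetical client mount point
```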
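For anyone decoding the values above: as far as I can tell, the trusted.afr changelog value is 12 bytes, i.e. three 32-bit pending-operation counters (data, metadata, entry) in network byte order; all-zero means nothing is pending. A small sketch, assuming that documented layout:

```python
import struct

def decode_afr(value: str) -> dict:
    """Split a trusted.afr changelog value into its three pending
    counters, assuming the commonly documented AFR layout: three
    big-endian 32-bit integers (data, metadata, entry)."""
    h = value[2:] if value.startswith("0x") else value
    data, metadata, entry = struct.unpack(">III", bytes.fromhex(h))
    return {"data": data, "metadata": metadata, "entry": entry}

print(decode_afr("0x000000000000000000000000"))
# → {'data': 0, 'metadata': 0, 'entry': 0}  (the "all-in-sync" state)
```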
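My current understanding of the client numbering, in case it helps someone answer: the suffix in trusted.afr.storage0-client-N refers to the Nth brick (counting from zero) in the order the bricks appear in the volume definition, e.g. as listed by /gluster volume info/. A sketch that builds that mapping from the Brick lines; the hostnames below are made up for illustration:

```python
import re

def client_index_to_brick(volume_info: str) -> dict:
    """Map 'client-N' names to bricks from the BrickN lines of
    `gluster volume info` output.  Assumes client-N is the Nth brick,
    zero-based, in listing order."""
    mapping = {}
    for m in re.finditer(r"^Brick(\d+):\s*(\S+)", volume_info, re.M):
        mapping[f"client-{int(m.group(1)) - 1}"] = m.group(2)
    return mapping

# Hypothetical `gluster volume info storage0` excerpt:
sample = """\
Brick1: pserver2:/mnt/gluster/brick0/storage
Brick2: pserver5:/mnt/gluster/brick0/storage
Brick3: pserver2:/mnt/gluster/brick1/storage
Brick4: pserver5:/mnt/gluster/brick1/storage
"""
print(client_index_to_brick(sample)["client-0"])
# → pserver2:/mnt/gluster/brick0/storage
```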