Getting lots of "stale NFS file handle" errors.

We have 4 nodes in our cluster, and clients NFS-mount the volume from any node in a round-robin. It appears that one node has gone bad: the clients mounting that node can't see the files that the others can see, `ls -l` gives rubbish for the metadata, and we get lots of these lines in nfs.log:

[2011-02-16 15:33:32.538756] I [dht-layout.c:588:dht_layout_normalize] glustervol1-dht: found anomalies in /production/people.1. holes=2 overlaps=0
[2011-02-16 15:33:32.540759] I [dht-layout.c:588:dht_layout_normalize] glustervol1-dht: found anomalies in /production/people.nano. holes=2 overlaps=0
[2011-02-16 15:33:32.543682] I [dht-layout.c:588:dht_layout_normalize] glustervol1-dht: found anomalies in /production/people.2. holes=2 overlaps=0
[2011-02-16 15:33:32.507428] I [dht-layout.c:588:dht_layout_normalize] glustervol1-dht: found anomalies in /production/skeleton. holes=2 overlaps=0
[2011-02-16 15:33:32.509440] I [dht-layout.c:588:dht_layout_normalize] glustervol1-dht: found anomalies in /production/svn. holes=2 overlaps=0
[2011-02-16 15:33:32.511275] I [dht-layout.c:588:dht_layout_normalize] glustervol1-dht: found anomalies in /production/tempo. holes=2 overlaps=0

Any ideas?

Thanks,
David

gluster 3.1.2

g3:/var/log/glusterfs # gluster volume info

Volume Name: glustervol1
Type: Distributed-Replicate
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: g1:/mnt/glus1
Brick2: g2:/mnt/glus1
Brick3: g3:/mnt/glus1
Brick4: g4:/mnt/glus1
Brick5: g1:/mnt/glus2
Brick6: g2:/mnt/glus2
Brick7: g3:/mnt/glus2
Brick8: g4:/mnt/glus2
Options Reconfigured:
diagnostics.dump-fd-stats: on
diagnostics.latency-measurement: off
network.ping-timeout: 20
performance.write-behind-window-size: 1mb
performance.cache-size: 1gb
performance.stat-prefetch: 1

--
David Lloyd
V Consultants
www.v-consultants.co.uk
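For anyone triaging a log like the one quoted above, the set of directories that DHT flagged can be pulled out with a short pipeline. This is a sketch assuming the exact "found anomalies in <path>. holes=... overlaps=..." message format shown in the excerpt; the sample file path (/tmp/nfs-sample.log) is arbitrary and stands in for /var/log/glusterfs/nfs.log.

```shell
# Build a small sample log with two of the lines quoted in the post.
cat > /tmp/nfs-sample.log <<'EOF'
[2011-02-16 15:33:32.538756] I [dht-layout.c:588:dht_layout_normalize] glustervol1-dht: found anomalies in /production/people.1. holes=2 overlaps=0
[2011-02-16 15:33:32.540759] I [dht-layout.c:588:dht_layout_normalize] glustervol1-dht: found anomalies in /production/people.nano. holes=2 overlaps=0
EOF

# List each affected path once: capture everything between
# "found anomalies in " and the trailing ". holes=...".
grep 'dht_layout_normalize' /tmp/nfs-sample.log \
  | sed -n 's/.*found anomalies in \(.*\)\. holes=.*/\1/p' \
  | sort -u
```

Running this against the two sample lines prints /production/people.1 and /production/people.nano, one per line; the same pipeline against the real nfs.log gives the full list of directories with layout anomalies.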