Hi,

I am using GlusterFS 2.0.6 on my servers and clients. My setup is replicated with cluster/replicate and then distributed with cluster/distribute (DHT). My bricks are paired: Brick1 on server1 is replicated to Brick2 on server2, and so on.

My problem is that self heal only seems to work on the secondary bricks (Brick2 in this example). As a test I can rm a file from Brick2 behind GlusterFS's back, trigger a self heal on my DFS with ls -lR, and the file is recovered on Brick2 as expected. However, if I remove a file from Brick1 and run the same self heal, it does not work. Not only that, but the file is now gone from my DFS (it is still stored on Brick2).

I have tried both option "lookup-unhashed on" and option "lookup-unhashed yes". (There seems to be some confusion in the docs about the correct syntax: the example at <http://gluster.com/community/documentation/index.php/Translators/cluster/distribute> uses "yes", while the text below it says the default is "on".)

Is this the expected behavior? Please advise on how to fix this. I want a self heal to repair both my primary and secondary bricks.

Regards
Roland Rabben
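
P.S. A rough sketch of how my client-side volfile stacks the translators, trimmed to a single replicate pair. Server names, volume names and subvolume names here are illustrative, not copied from my actual config:

    volume brick1
      type protocol/client
      option transport-type tcp
      option remote-host server1
      option remote-subvolume posix1    # export volume on server1
    end-volume

    volume brick2
      type protocol/client
      option transport-type tcp
      option remote-host server2
      option remote-subvolume posix2    # export volume on server2
    end-volume

    volume repl1
      type cluster/replicate
      subvolumes brick1 brick2          # Brick1 paired with Brick2
    end-volume

    volume dht
      type cluster/distribute
      option lookup-unhashed yes        # also tried "on"
      subvolumes repl1                  # plus the other replicate pairs in the real config
    end-volume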