Hi!

I have a trivial problem with self-healing. Maybe somebody will be able to tell me what I am doing wrong, and why the files do not heal as I expect.

Configuration:

Servers: two nodes A, B
---------
volume posix
  type storage/posix
  option directory /ext3/glusterfs13/brick
end-volume

volume brick
  type features/posix-locks
  option mandatory on
  subvolumes posix
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option auth.ip.brick.allow *
  option auth.ip.brick-ns.allow *
  subvolumes brick
end-volume
--------

Client: C
-------
volume brick1
  type protocol/client
  option transport-type tcp/client
  option remote-host A
  option remote-subvolume brick
end-volume

volume brick2
  type protocol/client
  option transport-type tcp/client
  option remote-host B
  option remote-subvolume brick
end-volume

volume afr
  type cluster/afr
  subvolumes brick1 brick2
end-volume

volume iot
  type performance/io-threads
  subvolumes afr
  option thread-count 8
end-volume
-------

Scenario:
1. mount the remote afr brick on C
2. do some ops
3. stop the server A (to simulate machine failure)
4. wait some time so that clock skew between A and B is not an issue
5. write file X to the gluster mount on C
6. start the server A
7. wait for C to reconnect to A
8. wait some time so that clock skew between A and B is not an issue
9. touch, read, stat, write to file X, ls the dir in which X is (all on the gluster mount on C)

(A rough command-level sketch of these steps is appended after my signature.)

And here is the problem: whatever I do, I cannot make file X appear on the backend fs of brick A, which was down when file X was created.

Help is really appreciated.

PS. I discussed a similar auto-healing problem on gluster-devel some time ago, and back then it magically worked once, so I stopped thinking about it. Today I see it again, and as we are willing to use glusterfs in production soon, the auto-heal functionality is crucial.

Regards,
Lukasz Osipiuk.

--
Łukasz Osipiuk
mailto: lukasz at osipiuk.net
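
PPS. For completeness, here is a rough sketch of the commands behind the scenario above. The spec-file paths (/etc/glusterfs/server.vol, /etc/glusterfs/client.vol) and the mount point (/mnt/gluster) are just placeholders, not my actual paths; adjust them to your setup.

-------
# on A and B: start the servers (assumed spec path)
glusterfsd -f /etc/glusterfs/server.vol

# step 1, on C: mount the afr volume (assumed spec path and mount point)
glusterfs -f /etc/glusterfs/client.vol /mnt/gluster

# step 3, on A: simulate a machine failure
killall glusterfsd

# step 5, on C: create file X while A is down
echo "written while A was down" > /mnt/gluster/X

# step 6, on A: bring the server back up
glusterfsd -f /etc/glusterfs/server.vol

# step 9, on C: operations that I expect to trigger self-heal
ls -l /mnt/gluster
cat /mnt/gluster/X
stat /mnt/gluster/X
touch /mnt/gluster/X

# check, on A: look at the backend directory directly
ls -l /ext3/glusterfs13/brick/    # X never shows up here
-------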