Hi GlusterFS users!

I have one replicated volume with two bricks:

s1 ~ # gluster volume info

Volume Name: data-ns
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: s1:/mnt/gluster/data-ns
Brick2: s2:/mnt/gluster/data-ns
Options Reconfigured:
performance.cache-refresh-timeout: 1
performance.io-thread-count: 32
auth.allow: 10.*
performance.cache-size: 1073741824

There are five clients that have mounted the volume from the s1 server. We had a hardware failure on the s2 box, and it was down for about a week. During that time all read/write operations went to s1.

Now that s2 is operational again, I would like to synchronize all files onto it. I started the GlusterFS server on s2 and triggered the self-heal process ("find with stat" on the GlusterFS mount from the s2 box).

During the replication I saw very strange behaviour from GlusterFS. Some clients tried to fetch many files from the s2 server, but those files either did not exist there yet or had a size of 0 bytes. This caused a lot of disk wait on the web servers (the clients that mounted the volume from s1), and eventually 503 HTTP responses were sent.

My question is: how can I avoid serving files from the s2 box until all files have been replicated correctly from the s1 server?

I have GlusterFS 3.2.6-1 installed from the Debian repository.

Thank you a lot in advance,
Jimmy
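For reference, the "find with stat" pass I ran on the s2 mount was essentially the following sketch. The helper name and the mount path are placeholders; stat-ing every file through the client mount is what makes the replicate translator check and heal each one.

```shell
# Sketch of the self-heal trigger: walk the GlusterFS client mount and
# stat every entry so that each file is checked (and healed) on access.
trigger_self_heal() {
    # $1: GlusterFS client mount point (placeholder, e.g. /mnt/glusterfs)
    find "$1" -noleaf -print0 | xargs --null --no-run-if-empty stat >/dev/null
}

# Example invocation (path is an assumption):
# trigger_self_heal /mnt/glusterfs
```

The `--no-run-if-empty` flag just keeps `xargs` from calling `stat` with no arguments if the mount happens to be empty.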