Adrian, please post the slave side logs. I see you use a file:// slave,
so to produce them, you need a running glusterd on the slave too.

So the clearest procedure to follow would be like this:

- stop geo-rep with the instance producing this
- delete master side logs (to get rid of old data)
- start glusterd on the slave box (if it was not running)
- start the geo-rep session again, and wait for the fault to come up
- collect the newly produced logs on both the master and slave side and post them.

How to locate the slave side logs is described here:
http://www.gluster.com/community/documentation/index.php/Gluster_3.2:_Troubleshooting_Geo-replication

Csaba

On Thu, Jun 30, 2011 at 4:43 PM, Adrian Carpenter <tac12 at wbic.cam.ac.uk> wrote:
> Yes, I can ssh between all the boxes without a password as root.
>
>
> On 30 Jun 2011, at 15:27, Csaba Henk wrote:
>
>> It seems that the connection gets dropped (or is not even
>> established). Is the ssh auth set up properly from the second volume?
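
For the archives, the procedure Csaba outlines could be sketched roughly like this (a sketch only, assuming the Gluster 3.2 CLI and default log locations; the volume name, slave URL, and paths below are placeholders, substitute your own):

```shell
# Placeholder names -- replace with your actual master volume and slave URL.
MASTER_VOL=mastervol
SLAVE_URL=file:///data/backup

# 1. Stop the geo-rep session that is producing the fault.
gluster volume geo-replication $MASTER_VOL $SLAVE_URL stop

# 2. Remove old master-side geo-rep logs (path is the 3.2 default; verify
#    yours with: gluster volume geo-replication $MASTER_VOL $SLAVE_URL config log-file).
rm -f /var/log/glusterfs/geo-replication/*.log

# 3. Make sure glusterd is running on the slave box (needed even for a
#    file:// slave, so that slave-side logs get produced).
/etc/init.d/glusterd start    # or: service glusterd start

# 4. Restart the session and wait for the faulty state to reappear.
gluster volume geo-replication $MASTER_VOL $SLAVE_URL start
gluster volume geo-replication $MASTER_VOL $SLAVE_URL status

# 5. Collect the freshly written logs from both sides and post them.
```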