Hi Samuli,

On 2013-03-20, Samuli Heinonen <samppah at neutraali.net> wrote:
>
> Dear all,
>
> I'm running GlusterFS 3.4 alpha2 together with oVirt 3.2. This is solely a test system and it doesn't have
> much data or anything important in it. Currently it has only 2 VMs running and disk usage is around 15 GB. I
> have been trying to set up geo-replication for disaster recovery testing. For geo-replication I did the following:
>
> All machines are running CentOS 6.4 and using GlusterFS packages from
> http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.4.0alpha2/EPEL.repo/.
> Gluster bricks are using XFS. On the slave I have tried ext4 and btrfs.
>
> 1. Installed the slave machine (a VM hosted in a separate environment) with glusterfs-geo-replication, rsync
>    and some other packages as needed by dependencies.
> 2. Installed the glusterfs-geo-replication and rsync packages on the GlusterFS server.
> 3. Created an SSH key on the server, saved it to /var/lib/glusterd/geo-replication/secret.pem and copied it to
>    the slave's /root/.ssh/authorized_keys.
> 4. On the server, ran:
>    - gluster volume geo-replication vmstorage slave:/backup/vmstorage config remote_gsyncd /usr/libexec/glusterfs/gsyncd
>    - gluster volume geo-replication vmstorage slave:/backup/vmstorage start
[...]
>
> I thought that maybe it was creating an index or something like that, so I let it run for about 30 hours. Still, after
> that there were no new log messages and no data being transferred to the slave. I tried using strace -p 27756 to
> see what was going on, but there was no output at all. My next thought was that maybe the running virtual machines
> were causing some trouble, so I shut down all VMs and restarted geo-replication, but it didn't have any
> effect. My last effort was to create a new, clean volume without any data in it and try geo-replication with it --
> no luck there either.

This behavior is confirmed -- it's exactly reproducible.

I'll try to get back to you tomorrow with an update. If that doesn't happen (because I'm not getting any cleverer...),
then I can only get back to you after the 4th of April; I'll be on leave.

Regards,
Csaba
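
P.S. In case it helps in the meantime, this is roughly the sequence I would run on the master to poke at the stalled
session, using the volume name and slave path from your message. The log-level bump and the log path are assumptions
on my part, so adjust as needed:

    # Show the reported state of the geo-replication session
    gluster volume geo-replication vmstorage slave:/backup/vmstorage status

    # Raise the session's log verbosity (assumed option name; 'config' without
    # arguments lists the options actually available on your build)
    gluster volume geo-replication vmstorage slave:/backup/vmstorage config log-level DEBUG

    # Watch the geo-replication log on the master (assumed default log location)
    tail -f /var/log/glusterfs/geo-replication/vmstorage/*.log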