Hi,

Xsync might work for you, since it does not sync deletes and renames. I do see the patch referenced below (http://review.gluster.org/#/c/8865/) in the v3.6.1 branch, so I am not sure why it is missing in your setup. It is a simple patch to a single Python file; it can be applied directly, and with it you can use changelog as the change detector so that deletes and renames are synced.

Thanks and Regards,
Kotresh H R

----- Original Message -----
From: "Gong XiaoHui" <xhgong@xxxxxxxxxxx>
To: "Kotresh Hiremath Ravishankar" <khiremat@xxxxxxxxxx>
Cc: "gluster-devel@xxxxxxxxxxx" <gluster-devel@xxxxxxxxxxx>
Sent: Friday, December 12, 2014 6:42:51 PM
Subject: Re: some issues about geo-replication and gfapi

I don't have the patch http://review.gluster.org/#/c/8865/. I did the following steps, and then geo-replication worked:

gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> stop
gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config change-detector xsync
gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> start

-----Original Message-----
From: Kotresh Hiremath Ravishankar [mailto:khiremat@xxxxxxxxxx]
Sent: December 11, 2014 21:38
To: Gong XiaoHui
Cc: Yang Ming; gluster-devel@xxxxxxxxxxx
Subject: Re: some issues about geo-replication and gfapi

Hi,

1. Could you confirm you have this patch: http://review.gluster.org/#/c/8865/?

Thanks and Regards,
Kotresh H R

----- Original Message -----
From: "Gong XiaoHui" <xhgong@xxxxxxxxxxx>
To: khiremat@xxxxxxxxxx
Cc: "Yang Ming" <myang@xxxxxxxxxxx>, gluster-devel@xxxxxxxxxxx
Sent: Thursday, December 11, 2014 8:00:52 AM
Subject: some issues about geo-replication and gfapi

The attachment contains the logs.

mastervolname: geo2_master_vol
slaveHost: testgfs-3 (10.100.7.85)
slavevolname: geo2_slave_vol
os: SUSE Linux Enterprise Server 11.3

-----Original Message-----
From: Kotresh Hiremath Ravishankar [mailto:khiremat@xxxxxxxxxx]
Sent: December 9, 2014 18:56
To: Gong XiaoHui
Cc: gluster-devel@xxxxxxxxxxx
Subject: Re: some issues about geo-replication and gfapi

Hi,

To answer your questions in order:

1. Could you please provide the geo-replication slave logs?

2. The steps to configure geo-replication differ between glusterfs versions <=3.4 and >=3.5.

   For gluster geo-replication <=3.4:
   http://www.gluster.org/community/documentation/index.php/HowTo:geo-replication

   For gluster geo-replication >=3.5:
   https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_distributed_geo_rep.md

   Upgrade steps from gluster geo-replication 3.4 to 3.5:
   http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.5

Thanks and Regards,
Kotresh H R

----- Original Message -----
From: "Gong XiaoHui" <xhgong@xxxxxxxxxxx>
To: "gluster-devel@xxxxxxxxxxx" <gluster-devel@xxxxxxxxxxx>
Sent: Monday, December 8, 2014 11:15:59 AM
Subject: some issues about geo-replication and gfapi

Hi,

I have two questions about glusterfs (version 3.6.1):

1. When I use libgfapi to write a file to a volume that is configured as a geo-replication master volume, it does not work well: the geo-replication status becomes faulty. The log follows:

[2014-12-08 13:12:11.708616] I [master(/data_xfs/geo2-master):1330:crawl] _GMaster: finished hybrid crawl syncing, stime: (1418010314, 0)
[2014-12-08 13:12:11.709706] I [master(/data_xfs/geo2-master):480:crawlwrap] _GMaster: primary master with volume id d220647c-5730-4cef-a89b-932470c914d2 ...
[2014-12-08 13:12:11.735719] I [master(/data_xfs/geo2-master):491:crawlwrap] _GMaster: crawl interval: 3 seconds
[2014-12-08 13:12:11.811722] I [master(/data_xfs/geo2-master):1182:crawl] _GMaster: slave's time: (1418010314, 0)
[2014-12-08 13:12:11.840826] E [repce(/data_xfs/geo2-master):207:__call__] RepceClient: call 8318:139656072095488:1418015531.84 (entry_ops) failed on peer with OSError
[2014-12-08 13:12:11.840990] E [syncdutils(/data_xfs/geo2-master):270:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 164, in main
    main_i()
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 643, in main_i
    local.service_loop(*[r for r in [remote] if r])
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1344, in service_loop
    g2.crawlwrap()
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 529, in crawlwrap
    self.crawl(no_stime_update=no_stime_update)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1194, in crawl
    self.process(changes)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 946, in process
    self.process_change(change, done, retry)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 910, in process_change
    self.slave.server.entry_ops(entries)
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 226, in __call__
    return self.ins(self.meth, *a)
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 208, in __call__
    raise res
OSError: [Errno 12] Cannot allocate memory
[2014-12-08 13:12:11.842178] I [syncdutils(/data_xfs/geo2-master):214:finalize] <top>: exiting.
[2014-12-08 13:12:11.843421] I [repce(agent):92:service_loop] RepceServer: terminating on reaching EOF.
[2014-12-08 13:12:11.843682] I [syncdutils(agent):214:finalize] <top>: exiting.
[2014-12-08 13:12:12.103255] I [monitor(monitor):222:monitor] Monitor: worker(/data_xfs/geo2-master) died in startup phase

2. Another question: I cannot configure geo-replication in 3.6.1 with the method I used in 3.4.1.

Any response is appreciated.

---------------------------------------------------------------
Gong XiaoHui
Technology Dept. X01
Shanghai Wind Information Co., Ltd. (Wind Info)
9/F Jian Gong Mansion, 33 Fushan Road, Pudong New Area, Shanghai, P.R.C. 200120
Tel: (0086 21)6888 2280*8310    Fax: (0086 21)6888 2281
Email: xhgong@xxxxxxxxxxx
http://www.wind.com.cn
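For reference, below is a minimal, untested sketch of the libgfapi write path described in question 1 above, using the public glfs_* C API. The volume name geo2_master_vol is taken from this thread; the server host name ("master-host"), the file path /gfapi-test.txt and the log path are placeholders for illustration only, not values from the original report.

/* gfapi_write.c - minimal libgfapi write sketch (placeholders noted above).
 * Build, assuming the glusterfs-api development package is installed:
 *   gcc gfapi_write.c -o gfapi_write -lgfapi
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
    const char *msg = "hello from libgfapi\n";

    /* Create a handle for the geo-replication master volume and point it
     * at one of the master cluster's nodes (management port 24007). */
    glfs_t *fs = glfs_new("geo2_master_vol");
    if (!fs)
        return 1;
    glfs_set_volfile_server(fs, "tcp", "master-host", 24007); /* placeholder host */
    glfs_set_logging(fs, "/var/log/glusterfs/gfapi-write.log", 7); /* example log path */

    /* Fetch the volfile and initialise the client stack. */
    if (glfs_init(fs) != 0) {
        fprintf(stderr, "glfs_init failed\n");
        glfs_fini(fs);
        return 1;
    }

    /* Create and write a small file on the master volume; this is the kind of
     * change the geo-replication worker later replays on the slave. */
    glfs_fd_t *fd = glfs_creat(fs, "/gfapi-test.txt", O_WRONLY | O_TRUNC, 0644);
    if (!fd) {
        fprintf(stderr, "glfs_creat failed\n");
        glfs_fini(fs);
        return 1;
    }
    if (glfs_write(fd, msg, strlen(msg), 0) < 0)
        fprintf(stderr, "glfs_write failed\n");

    glfs_close(fd);
    glfs_fini(fs);
    return 0;
}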