Hello all. I have an issue when I'm trying to populate a geo-replication volume:

[2015-05-25 23:59:26.666712] I [monitor(monitor):129:monitor] Monitor: ------------------------------------------------------------
[2015-05-25 23:59:26.667079] I [monitor(monitor):130:monitor] Monitor: starting gsyncd worker
[2015-05-25 23:59:26.762124] I [gsyncd(/gluster):532:main_i] <top>: syncing: gluster://localhost:volume1 -> ssh://root@xxxxxxxxxxxxxxxxxxxxxx:gluster://localhost:volume1_slave
[2015-05-25 23:59:29.611541] I [master(/gluster):58:gmaster_builder] <top>: setting up xsync change detection mode
[2015-05-25 23:59:29.612349] I [master(/gluster):357:__init__] _GMaster: using 'rsync' as the sync engine
[2015-05-25 23:59:29.613812] I [master(/gluster):58:gmaster_builder] <top>: setting up changelog change detection mode
[2015-05-25 23:59:29.614294] I [master(/gluster):357:__init__] _GMaster: using 'rsync' as the sync engine
[2015-05-25 23:59:29.616271] I [master(/gluster):1103:register] _GMaster: xsync temp directory: /var/run/gluster/volume1/ssh%3A%2F%2Froot%40192.168.178.233%3Agluster%3A%2F%2F127.0.0.1%3Avolume1_slave/1077eb0027f1f616115bcb74a330d1c2/xsync
[2015-05-25 23:59:29.648611] E [syncdutils(/gluster):240:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
  File "/usr/lib/glusterfs/python/syncdaemon/gsyncd.py", line 150, in main
    main_i()
  File "/usr/lib/glusterfs/python/syncdaemon/gsyncd.py", line 542, in main_i
    local.service_loop(*[r for r in [remote] if r])
  File "/usr/lib/glusterfs/python/syncdaemon/resource.py", line 1175, in service_loop
    g2.register()
  File "/usr/lib/glusterfs/python/syncdaemon/master.py", line 1077, in register
    workdir, logfile, 9, 5)
  File "/usr/lib/glusterfs/python/syncdaemon/resource.py", line 614, in changelog_register
    Changes.cl_register(cl_brick, cl_dir, cl_log, cl_level, retries)
  File "/usr/lib/glusterfs/python/syncdaemon/libgfchangelog.py", line 23, in cl_register
    ret = cls._get_api('gf_changelog_register')(brick, path,
  File "/usr/lib/glusterfs/python/syncdaemon/libgfchangelog.py", line 19, in _get_api
    return getattr(cls.libgfc, call)
  File "/usr/lib64/python2.7/ctypes/__init__.py", line 378, in __getattr__
    func = self.__getitem__(name)
  File "/usr/lib64/python2.7/ctypes/__init__.py", line 383, in __getitem__
    func = self._FuncPtr((name_or_ordinal, self))
AttributeError: python: undefined symbol: gf_changelog_register
[2015-05-25 23:59:29.650513] I [syncdutils(/gluster):192:finalize] <top>: exiting.
[2015-05-25 23:59:30.613435] I [monitor(monitor):157:monitor] Monitor: worker(/gluster) died in startup phase
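From the traceback it looks like gsyncd's ctypes wrapper (libgfchangelog.py) loads fine but cannot resolve the gf_changelog_register symbol when it tries to call it. Below is a small check I put together to repeat that lookup outside of gsyncd; the library name "libgfchangelog.so" and the find_library fallback are only my guesses at how the OpenSuSE package ships it, so the path may need adjusting:

#!/usr/bin/env python
# Roughly mimics the ctypes lookup that libgfchangelog.py performs, to see
# whether the changelog library that gets picked up really exports
# gf_changelog_register.  The library name is an assumption on my part;
# adjust it (e.g. to /usr/lib64/libgfchangelog.so.0) if it lives elsewhere.
from ctypes import CDLL
from ctypes.util import find_library

name = find_library("gfchangelog") or "libgfchangelog.so"
print("loading: %s" % name)
lib = CDLL(name)  # raises OSError if the library cannot be found at all
try:
    lib.gf_changelog_register  # same getattr-style lookup gsyncd does
    print("gf_changelog_register is exported")
except AttributeError:
    print("gf_changelog_register is NOT exported by %s" % name)

Running this with the same /usr/lib64/python2.7 interpreter that gsyncd uses should show whether the symbol is really missing from the library being picked up, or whether a different/older libgfchangelog is loaded instead.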
COMMANDS
************

# gluster volume geo-replication volume1 gluster3.marcobaldo.ch::volume1_slave start
Starting geo-replication session between volume1 & gluster3.marcobaldo.ch::volume1_slave has been successful

# gluster volume geo-replication volume1 gluster3.marcobaldo.ch::volume1_slave status

MASTER NODE    MASTER VOL    MASTER BRICK    SLAVE                                    STATUS             CHECKPOINT STATUS    CRAWL STATUS
-------------------------------------------------------------------------------------------------------------------------------------------
fs2            volume1       /gluster        gluster3.marcobaldo.ch::volume1_slave    Initializing...    N/A                  N/A
fs1            volume1       /gluster        gluster3.marcobaldo.ch::volume1_slave    Initializing...    N/A                  N/A

and after a few seconds:

# gluster volume geo-replication volume1 gluster3.marcobaldo.ch::volume1_slave status

MASTER NODE    MASTER VOL    MASTER BRICK    SLAVE                                    STATUS    CHECKPOINT STATUS    CRAWL STATUS
----------------------------------------------------------------------------------------------------------------------------------
fs2            volume1       /gluster        gluster3.marcobaldo.ch::volume1_slave    faulty    N/A                  N/A
fs1            volume1       /gluster        gluster3.marcobaldo.ch::volume1_slave    faulty    N/A                  N/A

VOLUMES
**********

Volume Name: volume1
Type: Replicate
Volume ID: 0952d1ce-f62c-40b6-809a-4e193db0f1f9
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster1.marcobaldo.ch:/gluster
Brick2: gluster2.marcobaldo.ch:/gluster
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
nfs.disable: off

Volume Name: volume1_slave
Type: Distribute
Volume ID: b0b161d8-a642-4d41-808e-2bb076989f78
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: gluster3.marcobaldo.ch:/gluster_slave

VERSION
*********

# glusterd -V
glusterfs 3.5.2 built on *bleep*
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser General Public License, version 3 or any later version (LGPLv3 or later), or the GNU General Public License, version 2 (GPLv2), in all cases as published by the Free Software Foundation.

I'm running OpenSuSE 13.2 and I have installed GlusterFS from the standard OpenSuSE repos. Currently I don't have any known problems with "Replicate" volumes.

May I ask for your help? I have been googling but I could not find any input.

Thanks in advance and have a nice day,
Marco

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users