Gluster Geo-Replication: ChangelogException "Is a directory"

Hello - I am having a problem with geo-replication on Gluster 5 that I
hope someone can help me with.

I have a 7-server distribute cluster as the primary (master) volume, and a
2-server distribute cluster as the secondary (slave) volume. Both are running
the same version of Gluster on CentOS 7: glusterfs-5.3-2.el7.x86_64.

I was able to set up the replication keys, user, groups, etc. and
establish the session, but it goes faulty quickly after initializing.

I ran into the missing libgfchangelog.so error and fixed it with a symlink:

[root@pcic-backup01 ~]# ln -s /usr/lib64/libgfchangelog.so.0 /usr/lib64/libgfchangelog.so
[root@pcic-backup01 ~]# ls -lh /usr/lib64/libgfchangelog.so*
lrwxrwxrwx. 1 root root  30 May 16 13:16 /usr/lib64/libgfchangelog.so -> /usr/lib64/libgfchangelog.so.0
lrwxrwxrwx. 1 root root  23 May 16 08:58 /usr/lib64/libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x. 1 root root 62K Feb 25 04:02 /usr/lib64/libgfchangelog.so.0.0.1
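
As far as I can tell, the geo-rep changelog agent loads this library through
ctypes by its unversioned name, which is why only the versioned .so.0 being
present breaks it. A minimal sketch of that assumption (my illustration, not
the actual gluster source):

    # Sketch only: assumes the agent does roughly CDLL("libgfchangelog.so", ...)
    from ctypes import CDLL, RTLD_GLOBAL

    try:
        # Without the /usr/lib64/libgfchangelog.so symlink this raises
        # OSError: libgfchangelog.so: cannot open shared object file
        libgfc = CDLL("libgfchangelog.so", mode=RTLD_GLOBAL, use_errno=True)
        print("loaded:", libgfc)
    except OSError as e:
        print("load failed:", e)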


But right now, when I try to start replication, it goes faulty:

[root@gluster01 ~]# gluster volume geo-replication storage geoaccount@10.0.231.81::pcic-backup start
Starting geo-replication session between storage & geoaccount@10.0.231.81::pcic-backup has been successful
[root@gluster01 ~]# gluster volume geo-replication status

MASTER NODE    MASTER VOL    MASTER BRICK                  SLAVE USER    SLAVE                                        SLAVE NODE    STATUS             CRAWL STATUS    LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
10.0.231.50    storage       /mnt/raid6-storage/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Initializing...    N/A             N/A
10.0.231.54    storage       /mnt/raid6-storage/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Initializing...    N/A             N/A
10.0.231.56    storage       /mnt/raid6-storage/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Initializing...    N/A             N/A
10.0.231.52    storage       /mnt/raid6-storage/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Initializing...    N/A             N/A
10.0.231.55    storage       /mnt/raid6-storage/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Initializing...    N/A             N/A
10.0.231.51    storage       /mnt/raid6-storage/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Initializing...    N/A             N/A
10.0.231.53    storage       /mnt/raid6-storage/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Initializing...    N/A             N/A
[root@gluster01 ~]# gluster volume geo-replication status

MASTER NODE    MASTER VOL    MASTER BRICK                  SLAVE USER    SLAVE                                        SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
------------------------------------------------------------------------------------------------------------------------------------------------------------------------
10.0.231.50    storage       /mnt/raid6-storage/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Faulty    N/A             N/A
10.0.231.54    storage       /mnt/raid6-storage/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Faulty    N/A             N/A
10.0.231.56    storage       /mnt/raid6-storage/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Faulty    N/A             N/A
10.0.231.55    storage       /mnt/raid6-storage/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Faulty    N/A             N/A
10.0.231.53    storage       /mnt/raid6-storage/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Faulty    N/A             N/A
10.0.231.51    storage       /mnt/raid6-storage/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Faulty    N/A             N/A
10.0.231.52    storage       /mnt/raid6-storage/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Faulty    N/A             N/A
[root@gluster01 ~]# gluster volume geo-replication storage geoaccount@10.0.231.81::pcic-backup stop
Stopping geo-replication session between storage & geoaccount@10.0.231.81::pcic-backup has been successful


And the log file
/var/log/glusterfs/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.log
contains the error: GLUSTER: Changelog register failed
error=[Errno 21] Is a directory

[root@gluster01 ~]# cat /var/log/glusterfs/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.log
[2019-05-23 17:07:23.500781] I [gsyncd(config-get):308:main] <top>:
Using session config file
path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf
[2019-05-23 17:07:23.629298] I [gsyncd(status):308:main] <top>: Using
session config file
path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf
[2019-05-23 17:07:31.354005] I [gsyncd(config-get):308:main] <top>:
Using session config file
path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf
[2019-05-23 17:07:31.483582] I [gsyncd(config-get):308:main] <top>:
Using session config file
path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf
[2019-05-23 17:07:31.863888] I [gsyncd(config-get):308:main] <top>:
Using session config file
path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf
[2019-05-23 17:07:31.994895] I [gsyncd(monitor):308:main] <top>: Using
session config file
path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf
[2019-05-23 17:07:33.133888] I
[gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker
Status Change status=Initializing...
[2019-05-23 17:07:33.134301] I [monitor(monitor):157:monitor] Monitor:
starting gsyncd worker brick=/mnt/raid6-storage/storage
slave_node=10.0.231.81
[2019-05-23 17:07:33.214462] I [gsyncd(agent
/mnt/raid6-storage/storage):308:main] <top>: Using session config file
path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf
[2019-05-23 17:07:33.216737] I [changelogagent(agent
/mnt/raid6-storage/storage):72:__init__] ChangelogAgent: Agent
listining...
[2019-05-23 17:07:33.228072] I [gsyncd(worker
/mnt/raid6-storage/storage):308:main] <top>: Using session config file
path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf
[2019-05-23 17:07:33.247236] I [resource(worker
/mnt/raid6-storage/storage):1366:connect_remote] SSH: Initializing SSH
connection between master and slave...
[2019-05-23 17:07:34.948796] I [gsyncd(config-get):308:main] <top>:
Using session config file
path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf
[2019-05-23 17:07:35.73339] I [gsyncd(status):308:main] <top>: Using
session config file
path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf
[2019-05-23 17:07:35.232405] I [resource(worker
/mnt/raid6-storage/storage):1413:connect_remote] SSH: SSH connection
between master and slave established. duration=1.9849
[2019-05-23 17:07:35.232748] I [resource(worker
/mnt/raid6-storage/storage):1085:connect] GLUSTER: Mounting gluster
volume locally...
[2019-05-23 17:07:36.359250] I [resource(worker
/mnt/raid6-storage/storage):1108:connect] GLUSTER: Mounted gluster
volume duration=1.1262
[2019-05-23 17:07:36.359639] I [subcmds(worker
/mnt/raid6-storage/storage):80:subcmd_worker] <top>: Worker spawn
successful. Acknowledging back to monitor
[2019-05-23 17:07:36.380975] E [repce(agent
/mnt/raid6-storage/storage):122:worker] <top>: call failed:
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 118, in worker
    res = getattr(self.obj, rmeth)(*in_data[2:])
  File "/usr/libexec/glusterfs/python/syncdaemon/changelogagent.py",
line 40, in register
    return Changes.cl_register(cl_brick, cl_dir, cl_log, cl_level, retries)
  File "/usr/libexec/glusterfs/python/syncdaemon/libgfchangelog.py",
line 45, in cl_register
    cls.raise_changelog_err()
  File "/usr/libexec/glusterfs/python/syncdaemon/libgfchangelog.py",
line 29, in raise_changelog_err
    raise ChangelogException(errn, os.strerror(errn))
ChangelogException: [Errno 21] Is a directory
[2019-05-23 17:07:36.382556] E [repce(worker
/mnt/raid6-storage/storage):214:__call__] RepceClient: call failed
call=27412:140659114579776:1558631256.38 method=register
error=ChangelogException
[2019-05-23 17:07:36.382833] E [resource(worker
/mnt/raid6-storage/storage):1266:service_loop] GLUSTER: Changelog
register failed error=[Errno 21] Is a directory
[2019-05-23 17:07:36.404313] I [repce(agent
/mnt/raid6-storage/storage):97:service_loop] RepceServer: terminating
on reaching EOF.
[2019-05-23 17:07:37.361396] I [monitor(monitor):278:monitor] Monitor:
worker died in startup phase brick=/mnt/raid6-storage/storage
[2019-05-23 17:07:37.370690] I
[gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker
Status Change status=Faulty
[2019-05-23 17:07:41.526408] I [gsyncd(config-get):308:main] <top>:
Using session config file
path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf
[2019-05-23 17:07:41.643923] I [gsyncd(status):308:main] <top>: Using
session config file
path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf
[2019-05-23 17:07:45.722193] I [gsyncd(config-get):308:main] <top>:
Using session config file
path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf
[2019-05-23 17:07:45.817210] I [gsyncd(config-get):308:main] <top>:
Using session config file
path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf
[2019-05-23 17:07:46.188499] I [gsyncd(config-get):308:main] <top>:
Using session config file
path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf
[2019-05-23 17:07:46.258817] I [gsyncd(config-get):308:main] <top>:
Using session config file
path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf
[2019-05-23 17:07:47.350276] I [gsyncd(monitor-status):308:main]
<top>: Using session config file
path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf
[2019-05-23 17:07:47.364751] I
[subcmds(monitor-status):29:subcmd_monitor_status] <top>: Monitor
Status Change status=Stopped
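
From the traceback, the exception is raised in libgfchangelog.py's
raise_changelog_err(), which just wraps whatever errno the underlying
gf_changelog_register() call left behind; errno 21 is EISDIR ("Is a
directory"). My rough reading of that error path (a paraphrase of the
traceback above, not the actual source - how the errno is read is my
assumption):

    import os
    import ctypes

    class ChangelogException(OSError):
        pass

    def raise_changelog_err():
        # Per the traceback: wrap the errno from the failed C call in
        # ChangelogException. (Assumption: picked up via ctypes.get_errno().)
        errn = ctypes.get_errno()
        raise ChangelogException(errn, os.strerror(errn))

    # errno 21 is EISDIR, which matches the message in the log:
    print(os.strerror(21))   # -> "Is a directory"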


I'm not really sure where to go from here...

[root@gluster01 ~]# gluster volume geo-replication storage geoaccount@10.0.231.81::pcic-backup config | grep -i changelog
change_detector:changelog
changelog_archive_format:%Y%m
changelog_batch_size:727040
changelog_log_file:/var/log/glusterfs/geo-replication/storage_10.0.231.81_pcic-backup/changes-${local_id}.log
changelog_log_level:INFO

Thanks,
 -Matthew


