Hi again,
New changes in behaviour: both active master nodes toggle to faulty, and the logs repeat the same entries over and over again.
Part of log, node1:
[2018-08-13 18:24:44.701711] I [gsyncdstatus(worker /urd-gds/gluster):276:set_active] GeorepStatus: Worker Status Change status=Active
[2018-08-13 18:24:44.704360] I [gsyncdstatus(worker /urd-gds/gluster):248:set_worker_crawl_status] GeorepStatus: Crawl Status Change status=History Crawl
[2018-08-13 18:24:44.705162] I [master(worker /urd-gds/gluster):1448:crawl] _GMaster: starting history crawl turns=1 stime=(1523907056, 0) entry_stime=None etime=1534184684
[2018-08-13 18:24:45.717072] I [master(worker /urd-gds/gluster):1477:crawl] _GMaster: slave's time stime=(1523907056, 0)
[2018-08-13 18:24:45.904958] E [repce(worker /urd-gds/gluster):197:__call__] RepceClient: call failed call=5919:140339726538560:1534184685.88 method=entry_ops error=GsyncdError
[2018-08-13 18:24:45.905111] E [syncdutils(worker /urd-gds/gluster):298:log_raise_exception] <top>: execution of "gluster" failed with ENOENT (No such file or directory)
[2018-08-13 18:24:45.919265] I [repce(agent /urd-gds/gluster):80:service_loop] RepceServer: terminating on reaching EOF.
[2018-08-13 18:24:46.553194] I [monitor(monitor):272:monitor] Monitor: worker died in startup phase brick=/urd-gds/gluster
[2018-08-13 18:24:46.561784] I [gsyncdstatus(monitor):243:set_worker_status] GeorepStatus: Worker Status Change status=Faulty
[2018-08-13 18:24:56.581748] I [monitor(monitor):158:monitor] Monitor: starting gsyncd worker brick=/urd-gds/gluster slave_node=urd-gds-geo-000
[2018-08-13 18:24:56.655164] I [gsyncd(worker /urd-gds/gluster):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/urd-gds-volume_urd-gds-geo-001_urd-gds-volume/gsyncd.conf
[2018-08-13 18:24:56.655193] I [gsyncd(agent /urd-gds/gluster):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/urd-gds-volume_urd-gds-geo-001_urd-gds-volume/gsyncd.conf
[2018-08-13 18:24:56.655889] I [changelogagent(agent /urd-gds/gluster):72:__init__] ChangelogAgent: Agent listining...
[2018-08-13 18:24:56.664628] I [resource(worker /urd-gds/gluster):1348:connect_remote] SSH: Initializing SSH connection between master and slave...
[2018-08-13 18:24:58.347415] I [resource(worker /urd-gds/gluster):1395:connect_remote] SSH: SSH connection between master and slave established. duration=1.6824
[2018-08-13 18:24:58.348151] I [resource(worker /urd-gds/gluster):1067:connect] GLUSTER: Mounting gluster volume locally...
[2018-08-13 18:24:59.463598] I [resource(worker /urd-gds/gluster):1090:connect] GLUSTER: Mounted gluster volume duration=1.1150
[2018-08-13 18:24:59.464184] I [subcmds(worker /urd-gds/gluster):70:subcmd_worker] <top>: Worker spawn successful. Acknowledging back to monitor
[2018-08-13 18:25:01.549007] I [master(worker /urd-gds/gluster):1534:register] _GMaster: Working dir path=/var/lib/misc/gluster/gsyncd/urd-gds-volume_urd-gds-geo-001_urd-gds-volume/urd-gds-gluster
[2018-08-13 18:25:01.549606] I [resource(worker /urd-gds/gluster):1253:service_loop] GLUSTER: Register time time=1534184701
[2018-08-13 18:25:01.593946] I [gsyncdstatus(worker /urd-gds/gluster):276:set_active] GeorepStatus: Worker Status Change status=Active
Part of log, node2:
[2018-08-13 18:25:14.554233] I [gsyncdstatus(monitor):243:set_worker_status] GeorepStatus: Worker Status Change status=Faulty
[2018-08-13 18:25:24.568727] I [monitor(monitor):158:monitor] Monitor: starting gsyncd worker brick=/urd-gds/gluster slave_node=urd-gds-geo-000
[2018-08-13 18:25:24.609642] I [gsyncd(agent /urd-gds/gluster):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/urd-gds-volume_urd-gds-geo-001_urd-gds-volume/gsyncd.conf
[2018-08-13 18:25:24.609678] I [gsyncd(worker /urd-gds/gluster):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/urd-gds-volume_urd-gds-geo-001_urd-gds-volume/gsyncd.conf
[2018-08-13 18:25:24.610362] I [changelogagent(agent /urd-gds/gluster):72:__init__] ChangelogAgent: Agent listining...
[2018-08-13 18:25:24.621551] I [resource(worker /urd-gds/gluster):1348:connect_remote] SSH: Initializing SSH connection between master and slave...
[2018-08-13 18:25:26.164855] I [resource(worker /urd-gds/gluster):1395:connect_remote] SSH: SSH connection between master and slave established. duration=1.5431
[2018-08-13 18:25:26.165124] I [resource(worker /urd-gds/gluster):1067:connect] GLUSTER: Mounting gluster volume locally...
[2018-08-13 18:25:27.331969] I [resource(worker /urd-gds/gluster):1090:connect] GLUSTER: Mounted gluster volume duration=1.1667
[2018-08-13 18:25:27.335560] I [subcmds(worker /urd-gds/gluster):70:subcmd_worker] <top>: Worker spawn successful. Acknowledging back to monitor
[2018-08-13 18:25:37.768867] I [master(worker /urd-gds/gluster):1534:register] _GMaster: Working dir path=/var/lib/misc/gluster/gsyncd/urd-gds-volume_urd-gds-geo-001_urd-gds-volume/urd-gds-gluster
[2018-08-13 18:25:37.769479] I [resource(worker /urd-gds/gluster):1253:service_loop] GLUSTER: Register time time=1534184737
[2018-08-13 18:25:37.787317] I [gsyncdstatus(worker /urd-gds/gluster):276:set_active] GeorepStatus: Worker Status Change status=Active
[2018-08-13 18:25:37.789822] I [gsyncdstatus(worker /urd-gds/gluster):248:set_worker_crawl_status] GeorepStatus: Crawl Status Change status=History Crawl
[2018-08-13 18:25:37.790008] I [master(worker /urd-gds/gluster):1448:crawl] _GMaster: starting history crawl turns=1 stime=(1525290650, 0) entry_stime=(1525296245, 0) etime=1534184737
[2018-08-13 18:25:37.791222] I [master(worker /urd-gds/gluster):1477:crawl] _GMaster: slave's time stime=(1525290650, 0)
[2018-08-13 18:25:38.63499] I [master(worker /urd-gds/gluster):1301:process] _GMaster: Skipping already processed entry ops to_changelog=1525290651 num_changelogs=1 from_changelog=1525290651
[2018-08-13 18:25:38.63621] I [master(worker /urd-gds/gluster):1315:process] _GMaster: Entry Time Taken MKD=0 MKN=0 LIN=0 SYM=0 REN=0 RMD=0 CRE=0 duration=0.0000 UNL=0
[2018-08-13 18:25:38.63678] I [master(worker /urd-gds/gluster):1325:process] _GMaster: Data/Metadata Time Taken SETA=1 SETX=0 meta_duration=0.0228 data_duration=0.2456 DATA="" XATT=0
[2018-08-13 18:25:38.63822] I [master(worker /urd-gds/gluster):1335:process] _GMaster: Batch Completed changelog_end=1525290651 entry_stime=(1525296245, 0) changelog_start=1525290651 stime=(1525290650, 0) duration=0.2723 num_changelogs=1 mode=history_changelog
[2018-08-13 18:25:38.73400] I [master(worker /urd-gds/gluster):1477:crawl] _GMaster: slave's time stime=(1525290650, 0)
[2018-08-13 18:25:38.480941] I [master(worker /urd-gds/gluster):1885:syncjob] Syncer: Sync Time Taken duration=0.1327 num_files=3 job=3 return_code=23
[2018-08-13 18:25:39.963423] I [master(worker /urd-gds/gluster):1885:syncjob] Syncer: Sync Time Taken duration=0.1133 num_files=8 job=1 return_code=23
[2018-08-13 18:25:39.980724] I [master(worker /urd-gds/gluster):1885:syncjob] Syncer: Sync Time Taken duration=0.6315 num_files=47 job=2 return_code=23
...............
[2018-08-13 18:26:04.534953] I [master(worker /urd-gds/gluster):1885:syncjob] Syncer: Sync Time Taken duration=0.0988 num_files=18 job=2 return_code=23
[2018-08-13 18:26:07.798583] I [master(worker /urd-gds/gluster):1885:syncjob] Syncer: Sync Time Taken duration=0.2600 num_files=27 job=2 return_code=23
[2018-08-13 18:26:08.708100] I [master(worker /urd-gds/gluster):1885:syncjob] Syncer: Sync Time Taken duration=0.4090 num_files=67 job=2 return_code=23
[2018-08-13 18:26:14.865883] E [repce(worker /urd-gds/gluster):197:__call__] RepceClient: call failed call=18662:140079998809920:1534184774.58 method=entry_ops error=GsyncdError
[2018-08-13 18:26:14.866166] E [syncdutils(worker /urd-gds/gluster):298:log_raise_exception] <top>: execution of "gluster" failed with ENOENT (No such file or directory)
[2018-08-13 18:26:14.991022] I [repce(agent /urd-gds/gluster):80:service_loop] RepceServer: terminating on reaching EOF.
[2018-08-13 18:26:15.384844] I [monitor(monitor):272:monitor] Monitor: worker died in startup phase brick=/urd-gds/gluster
[2018-08-13 18:26:15.397360] I [gsyncdstatus(monitor):243:set_worker_status] GeorepStatus: Worker Status Change status=Faulty
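The repeating failure is always the same: entry_ops fails because executing "gluster" returns ENOENT. A minimal check I could run on both master and slave nodes is sketched below; the gluster-command-dir / slave-gluster-command-dir options are my reading of the geo-replication config docs, so treat them as an assumption:

```shell
# Print where a binary resolves, or warn if it is not in PATH at all.
check_binary() {
    command -v "$1" || echo "WARNING: $1 not found in PATH"
}
check_binary gluster

# If gluster lives outside gsyncd's PATH (e.g. /usr/sbin), the session
# config can apparently point at it explicitly (volume/slave names are
# from our setup; option names assumed from the docs):
# gluster volume geo-replication urd-gds-volume urd-gds-geo-001::urd-gds-volume \
#     config gluster-command-dir /usr/sbin/
# gluster volume geo-replication urd-gds-volume urd-gds-geo-001::urd-gds-volume \
#     config slave-gluster-command-dir /usr/sbin/
```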
Help would be appreciated!
Thanks!
Regards
Marcus Pedersén
From: gluster-users-bounces@xxxxxxxxxxx <gluster-users-bounces@xxxxxxxxxxx> on behalf of Marcus Pedersén <marcus.pedersen@xxxxxx>
Sent: 12 August 2018 22:18
To: khiremat@xxxxxxxxxx
Cc: gluster-users@xxxxxxxxxxx
Subject: Re: Geo-replication stops after 4-5 hours
Hi,
As geo-replication stopped after 4-5 hours, I added a cron job that stops geo-replication, pauses for 2 minutes, and starts it again every 6 hours.
The cron job has been running for 5 days and the changelogs have been catching up.
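For reference, the workaround cron job is essentially the following (volume and slave names are from our setup; the RUN=echo line is added here so the sketch is a harmless dry run):

```shell
#!/bin/sh
# restart-georep.sh -- stop geo-replication, wait 2 minutes, start it again.
# RUN=echo makes this a dry run that only prints the commands;
# remove it to actually execute them.
MASTERVOL=urd-gds-volume
SLAVE=urd-gds-geo-001::urd-gds-volume
RUN=echo

$RUN gluster volume geo-replication "$MASTERVOL" "$SLAVE" stop
$RUN sleep 120
$RUN gluster volume geo-replication "$MASTERVOL" "$SLAVE" start

# crontab entry (every 6 hours):
# 0 */6 * * * /root/restart-georep.sh
```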
Now a different behaviour has shown up.
On one of the active master nodes I get a Python error.
The other active master node has started to toggle status between active and faulty.
See parts of logs below.
When I read the geo-replication troubleshooting docs, there is a suggestion, for when sync is not complete, to enforce a full sync of the data by erasing the index and restarting GlusterFS geo-replication.
There is no explanation of how to erase the index.
Should I enforce a full sync?
How do I erase the index?
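For what it's worth, the closest thing I have found to "erasing the index" in current releases seems to be recreating the session with reset-sync-time. The sequence below is my untested guess from the docs, using our volume names; please correct me if this is wrong:

```shell
# Untested sketch: force a full resync by deleting the session with
# reset-sync-time and recreating it (volume/slave names from our setup).
# RUN=echo keeps this a dry run that only prints the commands.
RUN=echo
$RUN gluster volume geo-replication urd-gds-volume urd-gds-geo-001::urd-gds-volume stop
$RUN gluster volume geo-replication urd-gds-volume urd-gds-geo-001::urd-gds-volume delete reset-sync-time
$RUN gluster volume geo-replication urd-gds-volume urd-gds-geo-001::urd-gds-volume create push-pem force
$RUN gluster volume geo-replication urd-gds-volume urd-gds-geo-001::urd-gds-volume start
```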
Thanks a lot!
Best regards
Marcus Pedersén
Node with Python error:
[2018-08-12 16:02:05.304924] I [resource(worker /urd-gds/gluster):1348:connect_remote] SSH: Initializing SSH connection between master and slave...
[2018-08-12 16:02:06.842832] I [resource(worker /urd-gds/gluster):1395:connect_remote] SSH: SSH connection between master and slave established. duration=1.5376
[2018-08-12 16:02:06.843370] I [resource(worker /urd-gds/gluster):1067:connect] GLUSTER: Mounting gluster volume locally...
[2018-08-12 16:02:07.930706] I [resource(worker /urd-gds/gluster):1090:connect] GLUSTER: Mounted gluster volume duration=1.0869
[2018-08-12 16:02:07.931536] I [subcmds(worker /urd-gds/gluster):70:subcmd_worker] <top>: Worker spawn successful. Acknowledging back to monitor
[2018-08-12 16:02:20.759797] I [master(worker /urd-gds/gluster):1534:register] _GMaster: Working dir path=/var/lib/misc/gluster/gsyncd/urd-gds-volume_urd-gds-geo-001_urd-gds-volume/urd-gds-gluster
[2018-08-12 16:02:20.760411] I [resource(worker /urd-gds/gluster):1253:service_loop] GLUSTER: Register time time=1534089740
[2018-08-12 16:02:20.831918] I [gsyncdstatus(worker /urd-gds/gluster):276:set_active] GeorepStatus: Worker Status Change status=Active
[2018-08-12 16:02:20.835541] I [gsyncdstatus(worker /urd-gds/gluster):248:set_worker_crawl_status] GeorepStatus: Crawl Status Change status=History Crawl
[2018-08-12 16:02:20.836832] I [master(worker /urd-gds/gluster):1448:crawl] _GMaster: starting history crawl turns=1 stime=(1523906126, 0) entry_stime=None etime=1534089740
[2018-08-12 16:02:21.848570] I [master(worker /urd-gds/gluster):1477:crawl] _GMaster: slave's time stime=(1523906126, 0)
[2018-08-12 16:02:21.950453] E [syncdutils(worker /urd-gds/gluster):330:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 360, in twrap
    tf(*aargs)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1880, in syncjob
    po = self.sync_engine(pb, self.log_err)
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1413, in rsync
    rconf.ssh_ctl_args + \
AttributeError: 'NoneType' object has no attribute 'split'
[2018-08-12 16:02:21.975228] I [repce(agent /urd-gds/gluster):80:service_loop] RepceServer: terminating on reaching EOF.
[2018-08-12 16:02:22.947170] I [monitor(monitor):272:monitor] Monitor: worker died in startup phase brick=/urd-gds/gluster
[2018-08-12 16:02:22.954096] I [gsyncdstatus(monitor):243:set_worker_status] GeorepStatus: Worker Status Change status=Faulty
[2018-08-12 16:02:32.973948] I [monitor(monitor):158:monitor] Monitor: starting gsyncd worker brick=/urd-gds/gluster slave_node=urd-gds-geo-000
[2018-08-12 16:02:33.16155] I [gsyncd(agent /urd-gds/gluster):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/urd-gds-volume_urd-gds-geo-001_urd-gds-volume/gsyncd.conf
[2018-08-12 16:02:33.16882] I [changelogagent(agent /urd-gds/gluster):72:__init__] ChangelogAgent: Agent listining...
[2018-08-12 16:02:33.17292] I [gsyncd(worker /urd-gds/gluster):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/urd-gds-volume_urd-gds-geo-001_urd-gds-volume/gsyncd.conf
[2018-08-12 16:02:33.26951] I [resource(worker /urd-gds/gluster):1348:connect_remote] SSH: Initializing SSH connection between master and slave...
[2018-08-12 16:02:34.642838] I [resource(worker /urd-gds/gluster):1395:connect_remote] SSH: SSH connection between master and slave established. duration=1.6156
[2018-08-12 16:02:34.643369] I [resource(worker /urd-gds/gluster):1067:connect] GLUSTER: Mounting gluster volume locally...
Node that toggles status between active and faulty:
[2018-08-12 19:33:03.475833] I [master(worker /urd-gds/gluster):1885:syncjob] Syncer: Sync Time Taken duration=0.2757 num_files=27 job=2 return_code=23
[2018-08-12 19:33:04.818854] I [master(worker /urd-gds/gluster):1885:syncjob] Syncer: Sync Time Taken duration=0.3767 num_files=67 job=1 return_code=23
[2018-08-12 19:33:09.926820] E [repce(worker /urd-gds/gluster):197:__call__] RepceClient: call failed call=14853:139697829693248:1534102389.64 method=entry_ops error=GsyncdError
[2018-08-12 19:33:09.927042] E [syncdutils(worker /urd-gds/gluster):298:log_raise_exception] <top>: execution of "gluster" failed with ENOENT (No such file or directory)
[2018-08-12 19:33:09.942267] I [repce(agent /urd-gds/gluster):80:service_loop] RepceServer: terminating on reaching EOF.
[2018-08-12 19:33:10.349848] I [monitor(monitor):272:monitor] Monitor: worker died in startup phase brick=/urd-gds/gluster
[2018-08-12 19:33:10.363173] I [gsyncdstatus(monitor):243:set_worker_status] GeorepStatus: Worker Status Change status=Faulty
[2018-08-12 19:33:20.386089] I [monitor(monitor):158:monitor] Monitor: starting gsyncd worker brick=/urd-gds/gluster slave_node=urd-gds-geo-000
[2018-08-12 19:33:20.456687] I [gsyncd(agent /urd-gds/gluster):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/urd-gds-volume_urd-gds-geo-001_urd-gds-volume/gsyncd.conf
[2018-08-12 19:33:20.456686] I [gsyncd(worker /urd-gds/gluster):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/urd-gds-volume_urd-gds-geo-001_urd-gds-volume/gsyncd.conf
[2018-08-12 19:33:20.457559] I [changelogagent(agent /urd-gds/gluster):72:__init__] ChangelogAgent: Agent listining...
[2018-08-12 19:33:20.511825] I [resource(worker /urd-gds/gluster):1348:connect_remote] SSH: Initializing SSH connection between master and slave...
[2018-08-12 19:33:22.88713] I [resource(worker /urd-gds/gluster):1395:connect_remote] SSH: SSH connection between master and slave established. duration=1.5766
[2018-08-12 19:33:22.89272] I [resource(worker /urd-gds/gluster):1067:connect] GLUSTER: Mounting gluster volume locally...
[2018-08-12 19:33:23.179249] I [resource(worker /urd-gds/gluster):1090:connect] GLUSTER: Mounted gluster volume duration=1.0896
[2018-08-12 19:33:23.179805] I [subcmds(worker /urd-gds/gluster):70:subcmd_worker] <top>: Worker spawn successful. Acknowledging back to monitor
[2018-08-12 19:33:35.245277] I [master(worker /urd-gds/gluster):1534:register] _GMaster: Working dir path=/var/lib/misc/gluster/gsyncd/urd-gds-volume_urd-gds-geo-001_urd-gds-volume/urd-gds-gluster
[2018-08-12 19:33:35.246495] I [resource(worker /urd-gds/gluster):1253:service_loop] GLUSTER: Register time time=1534102415
[2018-08-12 19:33:35.321988] I [gsyncdstatus(worker /urd-gds/gluster):276:set_active] GeorepStatus: Worker Status Change status=Active
[2018-08-12 19:33:35.324270] I [gsyncdstatus(worker /urd-gds/gluster):248:set_worker_crawl_status] GeorepStatus: Crawl Status Change status=History Crawl
[2018-08-12 19:33:35.324902] I [master(worker /urd-gds/gluster):1448:crawl] _GMaster: starting history crawl turns=1 stime=(1525290650, 0) entry_stime=(1525296245, 0) etime=1534102415
[2018-08-12 19:33:35.328735] I [master(worker /urd-gds/gluster):1477:crawl] _GMaster: slave's time stime=(1525290650, 0)
[2018-08-12 19:33:35.574338] I [master(worker /urd-gds/gluster):1301:process] _GMaster: Skipping already processed entry ops to_changelog=1525290651 num_changelogs=1 from_changelog=1525290651
[2018-08-12 19:33:35.574448] I [master(worker /urd-gds/gluster):1315:process] _GMaster: Entry Time Taken MKD=0 MKN=0 LIN=0 SYM=0 REN=0 RMD=0 CRE=0 duration=0.0000 UNL=0
[2018-08-12 19:33:35.574507] I [master(worker /urd-gds/gluster):1325:process] _GMaster: Data/Metadata Time Taken SETA=1 SETX=0 meta_duration=0.0249 data_duration=0.2156 DATA="" XATT=0
[2018-08-12 19:33:35.574723] I [master(worker /urd-gds/gluster):1335:process] _GMaster: Batch Completed changelog_end=1525290651 entry_stime=(1525296245, 0) changelog_start=1525290651 stime=(1525290650, 0) duration=0.2455 num_changelogs=1 mode=history_changelog
[2018-08-12 19:33:35.582545] I [master(worker /urd-gds/gluster):1477:crawl] _GMaster: slave's time stime=(1525290650, 0)
[2018-08-12 19:33:35.780823] I [master(worker /urd-gds/gluster):1885:syncjob] Syncer: Sync Time Taken duration=0.0847 num_files=3 job=2 return_code=23
[2018-08-12 19:33:37.362822] I [master(worker /urd-gds/gluster):1885:syncjob] Syncer: Sync Time Taken duration=0.0807 num_files=4 job=2 return_code=23
[2018-08-12 19:33:37.818542] I [master(worker /urd-gds/gluster):1885:syncjob] Syncer: Sync Time Taken duration=0.1098 num_files=11 job=1 return_code=23
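Every sync job above finishes with return_code=23, which per rsync(1) is "partial transfer due to error", not success. A small helper to translate the codes seen in the logs (meanings copied from the rsync man page):

```shell
# Translate the rsync return codes that gsyncd logs in its syncjob lines.
rsync_rc() {
    case "$1" in
        0)  echo "success" ;;
        23) echo "partial transfer due to error" ;;
        24) echo "partial transfer due to vanished source files" ;;
        *)  echo "other error (see EXIT VALUES in rsync(1))" ;;
    esac
}
rsync_rc 23
```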
From: gluster-users-bounces@xxxxxxxxxxx <gluster-users-bounces@xxxxxxxxxxx> on behalf of Marcus Pedersén <marcus.pedersen@xxxxxx>
Sent: 6 August 2018 13:28
To: khiremat@xxxxxxxxxx
Cc: gluster-users@xxxxxxxxxxx
Subject: Re: Geo-replication stops after 4-5 hours
Hi,
Is there a way to resolve the problem with rsync and the hanging processes?
Do I need to kill all the processes and hope that it starts again, or stop/start geo-replication?
If I stop/start geo-replication it will start again; I have tried that before.
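Right now I only spot the hung processes manually, with something like the following (the pattern is just a guess at matching the gsyncd rsync workers):

```shell
# List rsync processes with their elapsed runtime; the oldest ones are
# the candidates for a hang. The [r] in the pattern keeps the awk
# process itself out of the match.
ps -eo pid,etime,args | awk '/[r]sync/ { print }'
```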
Regards
Marcus
From: gluster-users-bounces@xxxxxxxxxxx <gluster-users-bounces@xxxxxxxxxxx> on behalf of Marcus Pedersén <marcus.pedersen@xxxxxx>
Sent: 2 August 2018 10:04
To: Kotresh Hiremath Ravishankar
Cc: gluster-users@xxxxxxxxxxx
Subject: Re: Geo-replication stops after 4-5 hours
Hi Kotresh,
I get the following and then it hangs:
strace: Process 5921 attached
write(2, "rsync: link_stat \"/tmp/gsyncd-au"..., 12811
When sync is running I can see rsync with geouser on the slave node.
Regards
Marcus
################
Marcus Pedersén
Systemadministrator
Interbull Centre
################
Sent from my phone
################
---
E-mailing SLU will result in SLU processing your personal data. For more information on how this is done, click here.