Re: - Geo-Replication sync process - data difference

Hi Aravinda,

I am using this version of glusterfs:

glusterfs 3.4.2 built on Jan 14 2014 18:05:35
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>

The status of the geo-replication is OK:

NODE     MASTER     SLAVE                                  STATUS
------------------------------------------------------------------
storage  storage root@xxxxxxxxxxxxxxx:/data/data-cluster     OK

So there are no issues there. Could it be that the changelog is big because the first sync took a lot of time, so it now needs to run through all the changelogs?
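
As a rough check that the sync is still actively transferring data, I could look for the gsyncd worker and its rsync children on the master node (a generic process listing, not a Gluster-specific command):

    # the [g]/[r] bracket trick keeps the grep process itself out of the output
    ps aux | grep -E '[g]syncd|[r]sync'

If rsync processes keep appearing and disappearing, I assume the backlog from the first crawl is still being worked through.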

In the log directory, I have two log files. In the first one, I have these kinds of log entries:

[2015-08-18 04:39:57.331947] I [glusterfsd.c:1068:reincarnate] 0-glusterfsd: Fetching the volume file from server...
[2015-08-18 04:39:57.332500] I [glusterfsd-mgmt.c:1584:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
[2015-08-18 04:39:57.343306] I [glusterfsd.c:1068:reincarnate] 0-glusterfsd: Fetching the volume file from server...
[2015-08-18 04:39:57.343651] I [glusterfsd-mgmt.c:1584:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing


In the second one, I was just getting some log entries when trying to check the file:

[2015-08-18 11:09:03.116003] E [syncdutils:174:log_raise_exception] <top>: timestamp corruption for ./dms/molecules/molecules-19-15411
[2015-08-18 11:09:03.122987] I [syncdutils:148:finalize] <top>: exiting.
[2015-08-18 11:09:03.924904] I [monitor(monitor):21:set_state] Monitor: new state: faulty
[2015-08-18 11:09:13.952065] I [monitor(monitor):80:monitor] Monitor: ------------------------------------------------------------
[2015-08-18 11:09:13.952385] I [monitor(monitor):81:monitor] Monitor: starting gsyncd worker
[2015-08-18 11:09:14.14427] I [gsyncd:404:main_i] <top>: syncing: gluster://localhost:storage -> ssh://root@xxxxxxxxxxxxxxx:/data/data-cluster
[2015-08-18 11:09:20.49017] I [master:60:gmaster_builder] <top>: setting up master for normal sync mode
[2015-08-18 11:09:21.303029] I [master:679:crawl] _GMaster: new master is 00568bee-a211-4d0f-9ad4-918849f824d3
[2015-08-18 11:09:21.303330] I [master:683:crawl] _GMaster: primary master with volume id 00568bee-a211-4d0f-9ad4-918849f824d3
...
[2015-08-18 11:10:14.106652] I [monitor(monitor):21:set_state] Monitor: new state: OK

It seems it changed the status to faulty and was able to recover, but what does this error mean: "timestamp corruption for ..."?
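
If I understand it correctly, the crawler decides what to sync by comparing xtime extended attributes between master and slave, and this error seems to be raised when reading that attribute fails for a file. For reference, the attribute can be dumped directly on the brick; the brick path below is just an example, not my real layout:

    # dump all trusted.* extended attributes (including the geo-rep xtime) in hex
    getfattr -d -m . -e hex /data/brick/dms/molecules/molecules-19-15411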


-- Kindest regards,

Milos Cuculovic
IT Manager

--
MDPI AG
Postfach, CH-4005 Basel, Switzerland
Office: Klybeckstrasse 64, CH-4057 Basel, Switzerland
Tel. +41 61 683 77 35
Fax +41 61 302 89 18
Email: cuculovic@xxxxxxxx
Skype: milos.cuculovic.mdpi

On 18.08.2015 10:48, Aravinda wrote:
Hi,

Please let us know the version of Gluster you are using.

Geo-replication uses rsync to sync the files, but detects the list of
changes using Changelog.
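
As a quick sanity check (the slave host below is a placeholder for your real one), listing the session options should show what the worker is using, such as the log file location and the rsync/ssh commands:

    gluster volume geo-replication storage root@<slave-host>:/data/data-cluster config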

Is the Geo-rep status command showing Faulty? If yes, you may find errors in
the log files (/var/log/glusterfs/geo-replication/).
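
Something along these lines (the exact file names under that directory vary per session) will pull out the error lines quickly:

    # error lines carry "] E [" between the timestamp and the module name
    grep -F '] E [' /var/log/glusterfs/geo-replication/*/*.log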

regards
Aravinda

On 08/18/2015 02:08 PM, Milos Cuculovic - MDPI wrote:
Hi All,

I am using geo-replication on 4.4 TB of storage data.
It took some time to do the initial sync, which finished a week ago, but the
problem is that I still have around 140 GB of data difference.

Any idea why? How does the sync process work? Does it use rsync?

Thank you.

-- Kindest regards,

Milos Cuculovic
IT Manager



_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users


