Géo-rep fail

On 05/16/11 17:06, anthony garnier wrote:
> Hi,
> I'm currently trying to use geo-rep on the local data-node into a
> directory, but it fails with status "faulty"
[...]
> I've run these commands:
> # gluster volume geo-replication athena /soft/venus config
>
> # gluster volume geo-replication athena /soft/venus start
>
> # gluster volume geo-replication athena /soft/venus status
> MASTER SLAVE STATUS
> --------------------------------------------------------------------------------
> athena /soft/venus faulty
>
>
> Here is the log file in Debug mod :
>
> [2011-05-16 13:28:55.268006] I [monitor(monitor):42:monitor] Monitor:
> ------------------------------------------------------------
> [2011-05-16 13:28:55.268281] I [monitor(monitor):43:monitor] Monitor:
> starting gsyncd worker
[...]
> [2011-05-16 13:28:59.547034] I [master:191:crawl] GMaster: primary
> master with volume id 28521f8f-49d3-4e2a-b984-f664f44f5289 ...
> [2011-05-16 13:28:59.547180] D [master:199:crawl] GMaster: entering .
> [2011-05-16 13:28:59.548289] D [repce:131:push] RepceClient: call
> 10888:47702589471600:1305545339.55 xtime('.',
> '28521f8f-49d3-4e2a-b984-f664f44f5289') ...
> [2011-05-16 13:28:59.596978] E [syncdutils:131:log_raise_exception]
> <top>: FAIL:
> Traceback (most recent call last):
> File "/usr/local/libexec/glusterfs/python/syncdaemon/syncdutils.py",
> line 152, in twrap
> tf(*aa)
> File "/usr/local/libexec/glusterfs/python/syncdaemon/repce.py", line
> 118, in listen
> rid, exc, res = recv(self.inf)
> File "/usr/local/libexec/glusterfs/python/syncdaemon/repce.py", line 42,
> in recv
> return pickle.load(inf)
> EOFError
>
>
> Has anyone already run into these errors?

This means the slave gsyncd instance could not start up properly. To debug 
this further, we need to see the slave-side logs. In your case, the 
following commands will set a debug log level for the slave (this takes 
effect only if done before starting the geo-replication session) and locate
its log file:

# gluster volume geo-replication /soft/venus config log-level DEBUG
# gluster volume geo-replication /soft/venus config log-file

The output of the latter will contain an unresolved parameter
${session-owner}. To get its actual value, run

# gluster volume geo-replication athena /soft/venus config session-owner
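
For illustration, here is a tiny sketch of that substitution in Python (the
template path and UUID below are made up -- use the strings that the two
config commands above actually print):

# Hypothetical values for illustration only; take the real ones from the
# output of 'config log-file' and 'config session-owner'.
log_file_template = "/hypothetical/log/dir/${session-owner}-slave.log"
session_owner = "00000000-0000-0000-0000-000000000000"

# Resolve the ${session-owner} parameter to get the real slave log path.
print(log_file_template.replace("${session-owner}", session_owner))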

Please post the content of the actual log file, whose path you get after 
performing that substitution. (See also

http://www.gluster.com/community/documentation/index.php/Gluster_3.2:_Configuring_Geo-replication

where slave-side logs are illustrated.)
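
For reference, the EOFError at the bottom of your traceback is raised by
pickle.load() inside repce.py's recv(): it simply means the connection to
the slave was closed before any reply arrived, which is what the master sees
when the slave gsyncd dies at startup. A minimal sketch of that behaviour in
plain Python (nothing gluster-specific, just to show where the exception
comes from):

import io
import pickle

def recv(inf):
    # same idea as recv() in repce.py: read one pickled message off the stream
    return pickle.load(inf)

# an empty stream stands in for a peer that exited before sending anything
dead_connection = io.BytesIO(b"")
try:
    recv(dead_connection)
except EOFError:
    print("EOFError: peer closed the connection without sending a reply")

So the EOFError on the master is only a symptom; the real failure will be in
the slave log mentioned above.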

Csaba



