some issues about geo-replication and gfapi


 



Hi

I have two questions about GlusterFS:

1. When I use libgfapi to write a file to a volume that is configured as the master volume of a geo-replication session, it does not work: the geo-replication status goes faulty. The log is below, and a sketch of the kind of libgfapi write involved appears after the log.

The GlusterFS version is 3.6.1.

 

[2014-12-08 13:12:11.708616] I [master(/data_xfs/geo2-master):1330:crawl] _GMaster: finished hybrid crawl syncing, stime: (1418010314, 0)

[2014-12-08 13:12:11.709706] I [master(/data_xfs/geo2-master):480:crawlwrap] _GMaster: primary master with volume id d220647c-5730-4cef-a89b-932470c914d2 ...

[2014-12-08 13:12:11.735719] I [master(/data_xfs/geo2-master):491:crawlwrap] _GMaster: crawl interval: 3 seconds

[2014-12-08 13:12:11.811722] I [master(/data_xfs/geo2-master):1182:crawl] _GMaster: slave's time: (1418010314, 0)

[2014-12-08 13:12:11.840826] E [repce(/data_xfs/geo2-master):207:__call__] RepceClient: call 8318:139656072095488:1418015531.84 (entry_ops) failed on peer with OSError

[2014-12-08 13:12:11.840990] E [syncdutils(/data_xfs/geo2-master):270:log_raise_exception] <top>: FAIL:

Traceback (most recent call last):

  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 164, in main

    main_i()

  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 643, in main_i

    local.service_loop(*[r for r in [remote] if r])

  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1344, in service_loop

    g2.crawlwrap()

  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 529, in crawlwrap

    self.crawl(no_stime_update=no_stime_update)

  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1194, in crawl

    self.process(changes)

  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 946, in process

    self.process_change(change, done, retry)

  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 910, in process_change

    self.slave.server.entry_ops(entries)

  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 226, in __call__

    return self.ins(self.meth, *a)

  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 208, in __call__

    raise res

OSError: [Errno 12] Cannot allocate memory

[2014-12-08 13:12:11.842178] I [syncdutils(/data_xfs/geo2-master):214:finalize] <top>: exiting.

[2014-12-08 13:12:11.843421] I [repce(agent):92:service_loop] RepceServer: terminating on reaching EOF.

[2014-12-08 13:12:11.843682] I [syncdutils(agent):214:finalize] <top>: exiting.

[2014-12-08 13:12:12.103255] I [monitor(monitor):222:monitor] Monitor: worker(/data_xfs/geo2-master) died in startup phase
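
For reference, a minimal sketch of the kind of libgfapi write involved is below (the host "master-host", volume name "masterVol", file path and log path are placeholders, and the header location may vary between GlusterFS versions):

/* Minimal libgfapi create-and-write sketch; build with: gcc write.c -lgfapi */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <glusterfs/api/glfs.h>   /* may be <api/glfs.h> on some installs */

int main(void)
{
    /* volume name is a placeholder */
    glfs_t *fs = glfs_new("masterVol");
    if (!fs)
        return 1;

    /* point the client at a management daemon; host/port are placeholders */
    glfs_set_volfile_server(fs, "tcp", "master-host", 24007);
    glfs_set_logging(fs, "/tmp/gfapi.log", 7);

    if (glfs_init(fs) != 0) {
        glfs_fini(fs);
        return 1;
    }

    /* create a file on the volume and write a small buffer to it */
    glfs_fd_t *fd = glfs_creat(fs, "/testfile", O_WRONLY | O_CREAT, 0644);
    if (!fd) {
        glfs_fini(fs);
        return 1;
    }
    const char *buf = "hello from gfapi\n";
    glfs_write(fd, buf, strlen(buf), 0);

    glfs_close(fd);
    glfs_fini(fs);
    return 0;
}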

 

 

 

2. Another question: I cannot configure geo-replication in 3.6.1 using the same method I used in 3.4.1.
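
My understanding is that geo-replication was redesigned as distributed geo-replication in 3.5, so the 3.4-style setup no longer applies. The documented setup for 3.5+/3.6 is roughly the following (masterVol, slave-host and slaveVol are placeholders), after configuring password-less root SSH from one master node to the slave node:

    # run on a master node
    gluster system:: execute gsec_create
    gluster volume geo-replication masterVol slave-host::slaveVol create push-pem
    gluster volume geo-replication masterVol slave-host::slaveVol start
    gluster volume geo-replication masterVol slave-host::slaveVol status

Is this the expected procedure, or is something else required?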

Any response is appreciated.

For prompt product support: press F1 in the Wind terminal or call the customer service hotline 400-820-Wind (9463)
---------------------------------------------------------------
Xiaohui Gong

Technology X01

Shanghai Wind Information Co., Ltd. (Wind Info)

9/F Jian Gong Mansion, 33 Fushan Road, Pudong New Area,
Shanghai, P.R.C. 200120
Tel: (0086 21)6888 2280*8310
Fax: (0086 21)6888 2281
Email: xhgong@xxxxxxxxxxx
http://www.wind.com.cn

 

_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
