Re: Reply: geo-replication status partial faulty

Hi,

There seems to be some issue on the slave node glusterfs01.sh3.ctripcorp.com.
Can you share the complete logs?

You can increase the log verbosity of the debug messages like this:
gluster volume geo-replication <master volume> <slave host>::<slave volume> config log-level DEBUG
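
For your session, with the volume and slave names from your mail, that would be:
gluster volume geo-replication filews glusterfs01.sh3.ctripcorp.com::filews_slave config log-level DEBUG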


Also, check /root/.ssh/authorized_keys on glusterfs01.sh3.ctripcorp.com.
It should contain the entries from /var/lib/glusterd/geo-replication/common_secret.pem.pub (present on the master node).
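
A quick way to compare the two, assuming geo-replication was set up as root (which matches the paths above):

# On the master node: the public keys collected for the session
cat /var/lib/glusterd/geo-replication/common_secret.pem.pub
# On glusterfs01.sh3.ctripcorp.com: each of those keys should appear here
cat /root/.ssh/authorized_keys

If entries are missing, recreating the session with the force option should push the keys again:
gluster volume geo-replication filews glusterfs01.sh3.ctripcorp.com::filews_slave create push-pem force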

Have a look at this one for example:
https://www.gluster.org/pipermail/gluster-users/2015-August/023174.html
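
You could also test the SSH connection that the gsyncd worker uses, from one of the faulty master nodes (assuming the default location of secret.pem):
ssh -i /var/lib/glusterd/geo-replication/secret.pem root@glusterfs01.sh3.ctripcorp.com
If this prompts for a password or fails, the worker will keep dying before establishing the connection, as in your log.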

Thanks,
Saravana

On 05/19/2016 07:53 AM, vyyy杨雨阳 wrote:

Hello,

 

I have tried to configure a geo-replication volume; all the master nodes have the same configuration. When I start this volume, the status shows some nodes as faulty, as follows:

 

gluster volume geo-replication filews glusterfs01.sh3.ctripcorp.com::filews_slave status

 

MASTER NODE      MASTER VOL    MASTER BRICK          SLAVE                                          STATUS     CHECKPOINT STATUS    CRAWL STATUS
-------------------------------------------------------------------------------------------------------------------------------------------------
SVR8048HW2285    filews        /export/sdb/filews    glusterfs01.sh3.ctripcorp.com::filews_slave    faulty     N/A                  N/A
SVR8050HW2285    filews        /export/sdb/filews    glusterfs03.sh3.ctripcorp.com::filews_slave    Passive    N/A                  N/A
SVR8047HW2285    filews        /export/sdb/filews    glusterfs01.sh3.ctripcorp.com::filews_slave    Active     N/A                  Hybrid Crawl
SVR8049HW2285    filews        /export/sdb/filews    glusterfs05.sh3.ctripcorp.com::filews_slave    Active     N/A                  Hybrid Crawl
SH02SVR5951      filews        /export/sdb/brick1    glusterfs06.sh3.ctripcorp.com::filews_slave    Passive    N/A                  N/A
SH02SVR5953      filews        /export/sdb/brick1    glusterfs01.sh3.ctripcorp.com::filews_slave    faulty     N/A                  N/A
SVR6995HW2285    filews        /export/sdb/filews    glusterfs01.sh3.ctripcorp.com::filews_slave    faulty     N/A                  N/A
SH02SVR5954      filews        /export/sdb/brick1    glusterfs01.sh3.ctripcorp.com::filews_slave    faulty     N/A                  N/A
SVR6994HW2285    filews        /export/sdb/filews    glusterfs02.sh3.ctripcorp.com::filews_slave    Passive    N/A                  N/A
SVR6993HW2285    filews        /export/sdb/filews    glusterfs01.sh3.ctripcorp.com::filews_slave    faulty     N/A                  N/A
SH02SVR5952      filews        /export/sdb/brick1    glusterfs01.sh3.ctripcorp.com::filews_slave    faulty     N/A                  N/A
SVR6996HW2285    filews        /export/sdb/filews    glusterfs04.sh3.ctripcorp.com::filews_slave    Passive    N/A                  N/A

 

On the faulty nodes, the log file /var/log/glusterfs/geo-replication/filews shows "worker(/export/sdb/filews) died before establishing connection":

 

[2016-05-18 16:55:46.402622] I [monitor(monitor):215:monitor] Monitor: ------------------------------------------------------------
[2016-05-18 16:55:46.402930] I [monitor(monitor):216:monitor] Monitor: starting gsyncd worker
[2016-05-18 16:55:46.517460] I [changelogagent(agent):72:__init__] ChangelogAgent: Agent listining...
[2016-05-18 16:55:46.518066] I [repce(agent):92:service_loop] RepceServer: terminating on reaching EOF.
[2016-05-18 16:55:46.518279] I [syncdutils(agent):214:finalize] <top>: exiting.
[2016-05-18 16:55:46.518194] I [monitor(monitor):267:monitor] Monitor: worker(/export/sdb/filews) died before establishing connection
[2016-05-18 16:55:56.697036] I [monitor(monitor):215:monitor] Monitor: ------------------------------------------------------------

 

Any advice and suggestions will be greatly appreciated.

Best Regards,
Yuyang Yang

 



_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users

