Hi,

After restarting the service, glusterd entered the failed state:
[root@master1 ~]# /etc/init.d/glusterd restart
Stopping glusterd: [FAILED]
Starting glusterd: [FAILED]
Note: this behavior only happens over the RDMA network; with ethernet there is no issue.
Thank you
Atul Yadav
On Tue, Jul 5, 2016 at 11:28 AM, Atin Mukherjee <amukherj@xxxxxxxxxx> wrote:
On Tue, Jul 5, 2016 at 11:01 AM, Atul Yadav <atulyadavtech@xxxxxxxxx> wrote:

Hi All,

The glusterfs environment details are given below:

[root@master1 ~]# cat /etc/redhat-release
CentOS release 6.7 (Final)
[root@master1 ~]# uname -r
2.6.32-642.1.1.el6.x86_64
[root@master1 ~]# rpm -qa | grep -i gluster
glusterfs-rdma-3.8rc2-1.el6.x86_64
glusterfs-api-3.8rc2-1.el6.x86_64
glusterfs-3.8rc2-1.el6.x86_64
glusterfs-cli-3.8rc2-1.el6.x86_64
glusterfs-client-xlators-3.8rc2-1.el6.x86_64
glusterfs-server-3.8rc2-1.el6.x86_64
glusterfs-fuse-3.8rc2-1.el6.x86_64
glusterfs-libs-3.8rc2-1.el6.x86_64
[root@master1 ~]#

Volume Name: home
Type: Replicate
Volume ID: 2403ddf9-c2e0-4930-bc94-734772ef099f
Status: Stopped
Number of Bricks: 1 x 2 = 2
Transport-type: rdma
Bricks:
Brick1: master1-ib.dbt.au:/glusterfs/home/brick1
Brick2: master2-ib.dbt.au:/glusterfs/home/brick2
Options Reconfigured:
network.ping-timeout: 20
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
config.transport: rdma
cluster.server-quorum-type: server
cluster.quorum-type: fixed
cluster.quorum-count: 1
locks.mandatory-locking: off
cluster.enable-shared-storage: disable
cluster.server-quorum-ratio: 51%

When only my single master node is up, the other nodes are still showing as connected:

gluster pool list
UUID                                    Hostname                State
89ccd72e-cb99-4b52-a2c0-388c99e5c7b3    master2-ib.dbt.au       Connected
d2c47fc2-f673-4790-b368-d214a58c59f4    compute01-ib.dbt.au     Connected
a5608d66-a3c6-450e-a239-108668083ff2    localhost               Connected
[root@master1 ~]#

Please advise us: is this normal behavior, or is this an issue?

First off, gluster has no master/slave configuration for the trusted storage pool (the peer list). Secondly, if master2 and compute01 are still reflected as 'connected' even though they are down, it means that localhost did not receive disconnect events for some reason. Could you restart the glusterd service on this node and check the output of gluster pool list again?
Thank You
Atul Yadav
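Atin's suggestion above (restart glusterd, then re-check the peer list) can be verified mechanically. The sketch below filters `gluster pool list` output for peers whose State column is anything other than Connected; the sample output here is invented from the hostnames in this thread for illustration, not real cluster state, so in practice you would pipe the live command output into the same awk filter.

```shell
# Hypothetical `gluster pool list` output after the down peers are
# correctly detected (hostnames taken from this thread, states assumed):
pool_list_output='UUID                                    Hostname                State
89ccd72e-cb99-4b52-a2c0-388c99e5c7b3    master2-ib.dbt.au       Disconnected
d2c47fc2-f673-4790-b368-d214a58c59f4    compute01-ib.dbt.au     Disconnected
a5608d66-a3c6-450e-a239-108668083ff2    localhost               Connected'

# Skip the header row (NR > 1) and print the hostname of every peer
# whose third column (State) is not "Connected".
printf '%s\n' "$pool_list_output" | awk 'NR > 1 && $3 != "Connected" { print $2 }'
```

On a live node the filter would be `gluster pool list | awk 'NR > 1 && $3 != "Connected" { print $2 }'`; an empty result means every peer is reporting Connected.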
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users