Re: CMAN Failed to start on Secondary Node


 



You need to configure the fencing devices: the <fencedevices/> element in your cluster.conf is empty and both per-node <fence/> blocks are bare, so the cluster has no way to fence a failed node.
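As a rough sketch of what that section could look like with IPMI-based fencing (the device names, addresses, and credentials below are placeholders, not taken from your setup):

```xml
<fencedevices>
        <fencedevice agent="fence_ipmilan" name="ipmi_n1" ipaddr="10.0.0.1" login="admin" passwd="secret"/>
        <fencedevice agent="fence_ipmilan" name="ipmi_n2" ipaddr="10.0.0.2" login="admin" passwd="secret"/>
</fencedevices>
```

and each <clusternode> would then reference its device inside a method, instead of the empty <fence/>:

```xml
<clusternode name="Node1" nodeid="1" votes="1">
        <fence>
                <method name="1">
                        <device name="ipmi_n1"/>
                </method>
        </fence>
</clusternode>
```

Remember to bump config_version when you change the file and propagate it to both nodes.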

2016-03-05 10:47 GMT+01:00 Shreekant Jena <shreekant.jena@xxxxxxxxx>:
> secondary node
>
> --------------------------------------
> [root@Node2 ~]# cat /etc/cluster/cluster.conf
> <?xml version="1.0"?>
> <cluster alias="IVRS_DB" config_version="166" name="IVRS_DB">
>         <fence_daemon clean_start="0" post_fail_delay="0"
> post_join_delay="3"/>
>         <clusternodes>
>                 <clusternode name="Node1" nodeid="1" votes="1">
>                         <fence/>
>                 </clusternode>
>                 <clusternode name="Node2" nodeid="2" votes="1">
>                         <fence/>
>                 </clusternode>
>         </clusternodes>
>         <cman expected_votes="1" two_node="1"/>
>         <fencedevices/>
>         <rm>
>                 <failoverdomains>
>                         <failoverdomain name="Package1" ordered="1"
> restricted="1">
>                                 <failoverdomainnode name="Node1"
> priority="1"/>
>                                 <failoverdomainnode name="Node2"
> priority="1"/>
>                         </failoverdomain>
>                 </failoverdomains>
>                 <resources>
>                         <ip address="10.199.214.64" monitor_link="1"/>
>                 </resources>
>                 <service autostart="1" domain="PE51SPM1" exclusive="1"
> name="PE51SPM1">
>                         <fs device="/dev/EI51SPM_DATA/SPIM_admin"
> force_fsck="1" force_unmount="1" fsid="3446" fstype="ext3"
> mountpoint="/SPIM/admin" name="admin" options="" self_fence="1"/>
>                         <fs device="/dev/EI51SPM_DATA/flatfile_upload"
> force_fsck="1" force_unmount="1" fsid="17646" fstype="ext3"
> mountpoint="/flatfile_upload" name="flatfile_upload" options=""
> self_fence="1"/>
>                         <fs device="/dev/EI51SPM_DATA/oracle" force_fsck="1"
> force_unmount="1" fsid="64480" fstype="ext3" mountpoint="/oracle"
> name="oracle" options="" self_fence="1"/>
>                         <fs device="/dev/EI51SPM_DATA/SPIM_datafile_01"
> force_fsck="1" force_unmount="1" fsid="60560" fstype="ext3"
> mountpoint="/SPIM/datafile_01" name="datafile_01" options=""
> self_fence="1"/>
>                         <fs device="/dev/EI51SPM_DATA/SPIM_datafile_02"
> force_fsck="1" force_unmount="1" fsid="48426" fstype="ext3"
> mountpoint="/SPIM/datafile_02" name="datafile_02" options=""
> self_fence="1"/>
>                         <fs device="/dev/EI51SPM_DATA/SPIM_redolog_01"
> force_fsck="1" force_unmount="1" fsid="54326" fstype="ext3"
> mountpoint="/SPIM/redolog_01" name="redolog_01" options="" self_fence="1"/>
>                         <fs device="/dev/EI51SPM_DATA/SPIM_redolog_02"
> force_fsck="1" force_unmount="1" fsid="23041" fstype="ext3"
> mountpoint="/SPIM/redolog_02" name="redolog_02" options="" self_fence="1"/>
>                         <fs device="/dev/EI51SPM_DATA/SPIM_redolog_03"
> force_fsck="1" force_unmount="1" fsid="46362" fstype="ext3"
> mountpoint="/SPIM/redolog_03" name="redolog_03" options="" self_fence="1"/>
>                         <fs device="/dev/EI51SPM_DATA/SPIM_archives_01"
> force_fsck="1" force_unmount="1" fsid="58431" fstype="ext3"
> mountpoint="/SPIM/archives_01" name="archives_01" options=""
> self_fence="1"/>
>                         <script file="/etc/cluster/dbstart" name="dbstart"/>
>                         <ip ref="10.199.214.64"/>
>                 </service>
>         </rm>
> </cluster>
>
>
> [root@Node2 ~]# clustat
> msg_open: Invalid argument
> Member Status: Inquorate
>
> Resource Group Manager not running; no service information available.
>
> Membership information not available
>
>
>
> Primary Node
>
> -----------------------------------------
> [root@Node1 ~]# clustat
> Member Status: Quorate
>
>   Member Name                              Status
>   ------ ----                              ------
>   Node1                 Online, Local, rgmanager
>   Node2                 Offline
>
>   Service Name         Owner (Last)                   State
>   ------- ----         ----- ------                   -----
>   Package1             Node1     started
>
>
> On Sat, Mar 5, 2016 at 12:17 PM, Digimer <lists@xxxxxxxxxx> wrote:
>>
>> Please share your cluster.conf (only obfuscate passwords please) and the
>> output of 'clustat' from each node.
>>
>> digimer
>>
>> On 05/03/16 01:46 AM, Shreekant Jena wrote:
>> > Dear All,
>> >
>> > I have a 2-node cluster, but after a reboot the secondary node shows as
>> > offline, and cman fails to start.
>> >
>> > Please find below logs on secondary node:-
>> >
>> > root@EI51SPM1 cluster]# clustat
>> > msg_open: Invalid argument
>> > Member Status: Inquorate
>> >
>> > Resource Group Manager not running; no service information available.
>> >
>> > Membership information not available
>> > [root@EI51SPM1 cluster]# tail -10 /var/log/messages
>> > Feb 24 13:36:23 EI51SPM1 ccsd[25487]: Error while processing connect:
>> > Connection refused
>> > Feb 24 13:36:23 EI51SPM1 kernel: CMAN: sending membership request
>> > Feb 24 13:36:27 EI51SPM1 ccsd[25487]: Cluster is not quorate.  Refusing
>> > connection.
>> > Feb 24 13:36:27 EI51SPM1 ccsd[25487]: Error while processing connect:
>> > Connection refused
>> > Feb 24 13:36:28 EI51SPM1 kernel: CMAN: sending membership request
>> > Feb 24 13:36:32 EI51SPM1 ccsd[25487]: Cluster is not quorate.  Refusing
>> > connection.
>> > Feb 24 13:36:32 EI51SPM1 ccsd[25487]: Error while processing connect:
>> > Connection refused
>> > Feb 24 13:36:32 EI51SPM1 ccsd[25487]: Cluster is not quorate.  Refusing
>> > connection.
>> > Feb 24 13:36:32 EI51SPM1 ccsd[25487]: Error while processing connect:
>> > Connection refused
>> > Feb 24 13:36:33 EI51SPM1 kernel: CMAN: sending membership request
>> > [root@EI51SPM1 cluster]#
>> > [root@EI51SPM1 cluster]# cman_tool status
>> > Protocol version: 5.0.1
>> > Config version: 166
>> > Cluster name: IVRS_DB
>> > Cluster ID: 9982
>> > Cluster Member: No
>> > Membership state: Joining
>> > [root@EI51SPM1 cluster]# cman_tool nodes
>> > Node  Votes Exp Sts  Name
>> > [root@EI51SPM1 cluster]#
>> > [root@EI51SPM1 cluster]#
>> >
>> >
>> > Thanks & regards
>> > SHREEKANTA JENA
>> >
>> >
>> >
>>
>>
>> --
>> Digimer
>> Papers and Projects: https://alteeve.ca/w/
>> What if the cure for cancer is trapped in the mind of a person without
>> access to education?
>>
>> --
>> Linux-cluster mailing list
>> Linux-cluster@xxxxxxxxxx
>> https://www.redhat.com/mailman/listinfo/linux-cluster
>
>
>






