RE: Two nodes don't see each other

JF,

I used this one for the new panadol cluster, and it works fine:

<?xml version="1.0"?>
<cluster config_version="6" name="LNX-CLU">
        <fence_daemon clean_start="1" post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="panadola" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="panadola-ilo"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="panadolb" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="panadolb-ilo"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <cman expected_votes="1" two_node="1"/>
        <fencedevices>
                <fencedevice agent="fence_ilo" hostname="panadola-mp" login="jimi" name="panadola-ilo" passwd="jimi_only"/>
                <fencedevice agent="fence_ilo" hostname="panadolb-mp" login="jimi" name="panadolb-ilo" passwd="jimi_only"/>
        </fencedevices>
        <rm>
                <failoverdomains/>
                <resources/>
        </rm>
</cluster>
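
By the way, if you edit the file later, I believe you can push the change to a running cluster without restarting anything. Something like this (assuming the stock RHEL4 ccs/cman tools; the version number is whatever you bumped config_version to):

# after raising config_version in /etc/cluster/cluster.conf, e.g. to 7:
ccs_tool update /etc/cluster/cluster.conf   # distribute the file to all members
cman_tool version -r 7                      # make cman pick up the new version
cman_tool nodes                             # both nodes should show state "M"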

My cluster start scripts and loaded modules:

service ccsd start
service cman start
service fenced start
service clvmd start
service gfs start
service rgmanager start
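
To get the same order at boot I just enable the stock init scripts with chkconfig (assuming each package installed its script; they carry their own start priorities, so the boot order works out):

for svc in ccsd cman fenced clvmd gfs rgmanager; do
    chkconfig $svc on   # register the service for the default runlevels
done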


[root@panadola samba]# lsmod
Module                  Size  Used by
lock_dlm               46196  2
gfs                   321548  2
lock_harness            6960  2 lock_dlm,gfs
nfsd                  267105  9
exportfs                7745  1 nfsd
lockd                  77809  2 nfsd
parport_pc             29185  0
lp                     15089  0
parport                43981  2 parport_pc,lp
autofs4                23241  0
i2c_dev                13633  0
i2c_core               28481  1 i2c_dev
dlm                   130180  9 lock_dlm
cman                  136480  19 lock_dlm,dlm
md5                     5697  1
ipv6                  282657  28
sunrpc                170425  12 nfsd,lockd
button                  9057  0
battery                11209  0
ac                      6729  0
uhci_hcd               34665  0
ehci_hcd               33349  0
hw_random               7137  0
tg3                    91717  0
e1000                 110381  0
floppy                 65809  0
dm_snapshot            18561  0
dm_zero                 3649  0
dm_mirror              28889  0
ext3                  137681  3
jbd                    68849  1 ext3
dm_mod                 66433  9 dm_snapshot,dm_zero,dm_mirror
cciss                  59017  5
sd_mod                 19392  0
scsi_mod              140177  2 cciss,sd_mod
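
When I chase membership problems I also check the kernel's own view of the cluster (assuming the cman-kernel /proc interface is present on your build, as it is on stock RHEL4):

cat /proc/cluster/status   # same information cman_tool status reports
cat /proc/cluster/nodes    # membership as the kernel sees it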




> -----Original Message-----
> From: linux-cluster-bounces@xxxxxxxxxx 
> [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of 
> Larvoire, Jean-Francois
> Sent: 18 January 2006 18:56
> To: linux-cluster@xxxxxxxxxx
> Subject:  Two nodes don't see each other
> 
> Hello,
> 
> I'm experimenting with a two-node cluster based on RHEL4.
> The goal is to manage service failover; I don't need GFS for now.
> I think the following packages are sufficient:
> 
> ccs           Manage the cluster configuration file
> cman          Symmetric Cluster Manager
> cman-kernel   Back end behind cman
> dlm           Distributed Lock Manager
> dlm-kernel    Back end behind dlm
> fence         Power control
> magma         cluster-abstraction library
> magma-plugins Maybe we need the sm plugin?
> 
> Is this correct?
> 
> Anyway, I've built and installed these 1.01 packages on both nodes.
> But when I start the cluster, both nodes seem to think 
> they're the only node in town.
> Any clue welcome!
> 
> 
> # Startup procedure I used on both nodes:
> depmod -a
> modprobe cman
> modprobe dlm
> ccsd
> cman_tool join
> 
> # What node 1 sees:
> [root@eiros3 ~]# cman_tool status
> Protocol version: 5.0.1
> Config version: 2
> Cluster name: eiros
> Cluster ID: 3249
> Cluster Member: Yes
> Membership state: Cluster-Member
> Nodes: 1
> Expected_votes: 1
> Total_votes: 1
> Quorum: 1
> Active subsystems: 0
> Node name: eiros3
> Node addresses: 192.168.8.73
> [root@eiros3 ~]# cman_tool nodes
> Node  Votes Exp Sts  Name
>    1    1    1   M   eiros3
> 
> # What node 2 sees:
> [root@eiros4 ~]# cman_tool status
> Protocol version: 5.0.1
> Config version: 2
> Cluster name: eiros
> Cluster ID: 3249
> Cluster Member: Yes
> Membership state: Cluster-Member
> Nodes: 1
> Expected_votes: 1
> Total_votes: 1
> Quorum: 1
> Active subsystems: 0
> Node name: eiros4
> Node addresses: 192.168.8.74
> [root@eiros4 ~]# cman_tool nodes
> Node  Votes Exp Sts  Name
>    2    1    1   M   eiros4
> 
> # Here's the cluster.conf file:
> <?xml version="1.0"?>
> <cluster name="eiros" config_version="2">
>   <fencedevices>
>     <fencedevice name="eiros3-ilo" agent="fence_ilo"
>       hostname="eiros3-mp" login="login" passwd="passwd"/>
>     <fencedevice name="eiros4-ilo" agent="fence_ilo"
>       hostname="eiros4-mp" login="login" passwd="passwd"/>
>     <fencedevice name="last_resort" agent="fence_manual"/>
>   </fencedevices>
> 
>   <clusternodes>
>     <clusternode name="eiros3" nodeid="1" votes="1">
>       <fence>
>         <!-- "power" method is tried before all others -->
>         <method name="power">
>           <device name="eiros3-ilo"/>
>         </method>
>         <method name="human">
>           <device name="last_resort" ipaddr="eiros3"/>
>         </method>
>       </fence>
>     </clusternode>
> 
>     <clusternode name="eiros4" nodeid="2" votes="1">
>       <fence>
>         <!-- "power" method is tried before all others -->
>         <method name="power">
>           <device name="eiros4-ilo"/>
>         </method>
>         <method name="human">
>           <device name="last_resort" ipaddr="eiros4"/>
>         </method>
>       </fence>
>     </clusternode>
>   </clusternodes>
> 
>   <!-- Two-node clusters need this special quorum adjustment -->
>   <cman port="6809" two_node="1" expected_votes="1"/>
> </cluster>
> 
> ========================================================================
> = Jean-François Larvoire                 =========   _/      ===========
> = Hewlett-Packard                        =======    _/           =======
> = 5 Avenue Raymond Chanas, Eybens        =====     _/_/_/  _/_/_/  =====
> = 38053 Grenoble Cedex 9, FRANCE         =====    _/  _/  _/  _/   =====
> = Phone: +33 476 14 13 38                =====   _/  _/  _/_/_/    =====
> = Fax:   +33 476 14 45 19                =======        _/       =======
> = Email: jean-francois.larvoire@xxxxxx   ==========    _/     ==========
> ========================================================================
> 
> --
> 
> Linux-cluster@xxxxxxxxxx
> https://www.redhat.com/mailman/listinfo/linux-cluster
> 

--

Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
