Re: How to run same service in parallel in RedHat Cluster 5.0

Ok, that *looks* fine. So when you start cman and rgmanager, what
does 'clustat' show?
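
For reference (just a sketch, assuming the stock RHEL 5 init scripts),
that is, on each node:

    service cman start
    service rgmanager start
    clustat

Keep in mind that clustat will only list service states once rgmanager
is actually running.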

Also, *set up fencing*. Without fencing configured, weird things will
happen. Once you have fencing configured and tested, paste the updated
cluster.conf and the output of clustat.
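
As a rough sketch only (fence_ipmilan is an assumption here, and the IPs
and credentials are made up; use whatever out-of-band interface your
servers actually have, e.g. iLO, DRAC or a switched PDU), the <fence/>
stubs in your <clusternode> entries would grow a method/device block:

        <clusternode name="server-87111" nodeid="1" votes="1">
                <fence>
                        <method name="1">
                                <device name="ipmi_n01"/>
                        </method>
                </fence>
        </clusternode>

(and the same for server-87112 pointing at ipmi_n02), while the empty
<fencedevices/> becomes:

        <fencedevices>
                <fencedevice agent="fence_ipmilan" name="ipmi_n01" ipaddr="10.0.0.1" login="admin" passwd="secret"/>
                <fencedevice agent="fence_ipmilan" name="ipmi_n02" ipaddr="10.0.0.2" login="admin" passwd="secret"/>
        </fencedevices>

After editing, bump config_version, push the change out (on RHEL 5 that
should be 'ccs_tool update /etc/cluster/cluster.conf', if I recall
correctly), then test with 'fence_node server-87111' and
'fence_node server-87112' and make sure each node really gets power-cycled.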

On 09/28/2011 09:57 AM, Ruben Sajnovetzky wrote:
> 
> I copied the full cluster.conf; I deleted everything else to
> “concentrate” on the issue.
> Now I re-created everything from scratch with only the FS service. I’m
> copying the files and the output you requested here.
> 
> Situation is still the same.
> 
> cluster.conf file:
> 
> <?xml version="1.0"?>
> <cluster alias="PPM Toronto" config_version="30" name="PPM Toronto">
>         <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
>         <clusternodes>
>                 <clusternode name="server-87111" nodeid="1" votes="1">
>                         <fence/>
>                 </clusternode>
>                 <clusternode name="server-87112" nodeid="2" votes="1">
>                         <fence/>
>                 </clusternode>
>         </clusternodes>
>         <cman expected_votes="1" two_node="0">
>                 <multicast addr="224.4.5.6"/>
>         </cman>
>         <fencedevices/>
>         <rm>
>                 <failoverdomains>
>                         <failoverdomain name="PPM GW Failover" nofailback="1" ordered="0" restricted="1">
>                                 <failoverdomainnode name="server-87111" priority="1"/>
>                         </failoverdomain>
>                         <failoverdomain name="PPM Units Failover" nofailback="1" ordered="0" restricted="1">
>                                 <failoverdomainnode name="server-87112" priority="1"/>
>                         </failoverdomain>
>                 </failoverdomains>
>                 <resources>
>                         <fs device="/dev/VolGroup00/optvol" force_fsck="1" force_unmount="0" fsid="36845" fstype="ext3" mountpoint="/opt" name="PPM_OPT_FS" self_fence="0"/>
>                 </resources>
>                 <service autostart="0" domain="PPM GW Failover" exclusive="0" name="PPM Gateway">
>                         <fs ref="PPM_OPT_FS"/>
>                 </service>
>                 <service autostart="0" domain="PPM Units Failover" exclusive="0" name="PPM Units">
>                         <fs ref="PPM_OPT_FS"/>
>                 </service>
>         </rm>
> </cluster>
> 
> 
> ------------------------------------------
> 
> /etc/fstab
> 
> /dev/VolGroup00/LogVol00 /                       ext3    defaults        1 1
> LABEL=/boot             /boot                   ext3    defaults        1 2
> tmpfs                   /dev/shm                tmpfs   defaults        0 0
> devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
> sysfs                   /sys                    sysfs   defaults        0 0
> proc                    /proc                   proc    defaults        0 0
> /dev/VolGroup00/LogVol01 swap                    swap    defaults        0 0
> /dev/VolGroup00/homevol /home                   ext3    defaults        1 1
> #####/dev/VolGroup00/optvol  /opt                    ext3    defaults        1 1
> 
> 
> 
> ------------------------------------------
> 
> [root@server-87112 cluster]# pvscan
>   PV /dev/sda2   VG VolGroup00   lvm2 [255.88 GB / 17.09 GB free]
>   Total: 1 [255.88 GB] / in use: 1 [255.88 GB] / in no VG: 0 [0   ]
> [root@server-87112 cluster]# vgscan
>   Reading all physical volumes.  This may take a while...
>   Found volume group "VolGroup00" using metadata type lvm2
> [root@server-87112 cluster]# lvscan
>   ACTIVE            '/dev/VolGroup00/LogVol00' [11.00 GB] inherit
>   ACTIVE            '/dev/VolGroup00/LogVol01' [7.78 GB] inherit
>   ACTIVE            '/dev/VolGroup00/homevol' [100.00 GB] inherit
>   ACTIVE            '/dev/VolGroup00/optvol' [120.00 GB] inherit
> 
> [root@server-87111 cluster]# pvscan
>   PV /dev/sda2   VG VolGroup00   lvm2 [255.88 GB / 17.09 GB free]
>   Total: 1 [255.88 GB] / in use: 1 [255.88 GB] / in no VG: 0 [0   ]
> [root@server-87111 cluster]# vgscan
>   Reading all physical volumes.  This may take a while...
>   Found volume group "VolGroup00" using metadata type lvm2
> [root@server-87111 cluster]# lvscan
>   ACTIVE            '/dev/VolGroup00/LogVol00' [11.00 GB] inherit
>   ACTIVE            '/dev/VolGroup00/LogVol01' [7.78 GB] inherit
>   ACTIVE            '/dev/VolGroup00/homevol' [100.00 GB] inherit
>   ACTIVE            '/dev/VolGroup00/optvol' [120.00 GB] inherit
> 
> 
> 
> On 28-Sep-2011 12:44 PM, "Digimer" <linux@xxxxxxxxxxx> wrote:
> 
>     On 09/28/2011 06:09 AM, Ruben Sajnovetzky wrote:
>     > This approach didn’t work either :(
>     > The first server started the service; the second couldn’t start it.
> 
>     You only shared a small snippet of your cluster.conf, and none of
>     the other requested info, so I can't tell what is actually missing
>     versus simply omitted.
> 
>     --
>     Digimer
>     E-Mail:              digimer@xxxxxxxxxxx
>     Freenode handle:     digimer
>     Papers and Projects: http://alteeve.com
>     Node Assassin:       http://nodeassassin.org
>     "At what point did we forget that the Space Shuttle was, essentially,
>     a program that strapped human beings to an explosion and tried to stab
>     through the sky with fire and math?"
> 


-- 
Digimer
E-Mail:              digimer@xxxxxxxxxxx
Freenode handle:     digimer
Papers and Projects: http://alteeve.com
Node Assassin:       http://nodeassassin.org
"At what point did we forget that the Space Shuttle was, essentially,
a program that strapped human beings to an explosion and tried to stab
through the sky with fire and math?"

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster


