Re: Question about cluster behavior

The cluster log.
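On RHEL 6 each cluster daemon writes its own file under /var/log/cluster, so for a startup/quorum problem like this the most useful ones would presumably be:

    /var/log/cluster/corosync.log     (membership and quorum transitions)
    /var/log/cluster/qdiskd.log       (quorum-disk registration and votes)
    /var/log/cluster/fenced.log       (fencing activity at startup)
    /var/log/cluster/rgmanager.log    (why services did or did not start)

plus /var/log/messages for anything logged before those files are opened.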


2014-02-14 18:07 GMT+01:00 FABIO FERRARI <fabio.ferrari@xxxxxxxxxx>:
So it's not normal behavior, I guess.

Here is my cluster.conf:

<?xml version="1.0"?>
<cluster config_version="59" name="mail">
        <clusternodes>
                <clusternode name="eta.mngt.unimo.it" nodeid="1">
                        <fence>
                                <method name="fence-eta">
                                        <device name="fence-eta"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="beta.mngt.unimo.it" nodeid="2">
                        <fence>
                                <method name="fence-beta">
                                        <device name="fence-beta"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="guerro.mngt.unimo.it" nodeid="3">
                        <fence>
                                <method name="fence-guerro">
                                        <device name="fence-guerro"
port="Guerro
" ssl="on" uuid="4213f370-9572-63c7-26e4-22f0f43843aa"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <cman expected_votes="5"/>
        <quorumd label="mail-qdisk"/>
        <rm>
                <resources>
                        <ip address="155.185.44.61/24" sleeptime="10"/>
                        <mysql config_file="/etc/my.cnf" listen_address="155.185.44.61" name="mysql" shutdown_wait="10" startup_wait="10"/>
                        <script file="/etc/init.d/httpd" name="httpd"/>
                        <script file="/etc/init.d/postfix" name="postfix"/>
                        <script file="/etc/init.d/dovecot" name="dovecot"/>
                        <fs device="/dev/mapper/mailvg-maillv" force_fsck="1" force_unmount="1" fsid="58161" fstype="xfs" mountpoint="/cl" name="mailvg-maillv" options="defaults,noauto" self_fence="1"/>
                        <lvm lv_name="maillv" name="lvm-mailvg-maillv" self_fence="1" vg_name="mailvg"/>
                </resources>
                <failoverdomains>
                        <failoverdomain name="mailfailoverdomain" nofailback="1" ordered="1" restricted="1">
                                <failoverdomainnode name="eta.mngt.unimo.it" priority="1"/>
                                <failoverdomainnode name="beta.mngt.unimo.it" priority="2"/>
                                <failoverdomainnode name="guerro.mngt.unimo.it" priority="3"/>
                        </failoverdomain>
                </failoverdomains>
                <service domain="mailfailoverdomain" max_restarts="3" name="mailservices" recovery="restart" restart_expire_time="600">
                        <fs ref="mailvg-maillv">
                                <ip ref="155.185.44.61/24">
                                        <mysql ref="mysql">
                                                <script ref="httpd"/>
                                                <script ref="postfix"/>
                                                <script ref="dovecot"/>
                                        </mysql>
                                </ip>
                        </fs>
                </service>
        </rm>
        <fencedevices>
                <fencedevice agent="fence_ipmilan" auth="password"
ipaddr="155.185.135.105" lanplus="on" login="root"
name="fence-eta" passwd="******" pr
ivlvl="ADMINISTRATOR"/>
                <fencedevice agent="fence_ipmilan" auth="password"
ipaddr="155.185.135.106" lanplus="on" login="root"
name="fence-beta" passwd="******" p
rivlvl="ADMINISTRATOR"/>
                <fencedevice agent="fence_vmware_soap"
ipaddr="155.185.0.10" login="etabetaguerro"
name="fence-guerro" passwd="******"/>
        </fencedevices>
</cluster>
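For reference: with expected_votes="5", three 1-vote nodes, and a quorumd stanza with no explicit votes attribute (which qdiskd on RHEL 6 normally auto-sizes to node count minus one, i.e. 2 here), quorum works out to 3 votes, so two nodes plus a registered quorum disk should be quorate. A minimal sketch of how to check this on a running node, assuming the stock cman/rgmanager tools:

    cman_tool status      (shows Expected votes, Total votes, and Quorum)
    cman_tool nodes       (lists members; the qdisk typically shows as node 0)
    clustat               (shows whether the quorum disk is online)
    mkqdisk -L            (lists the quorum-disk labels this node can see)

If two nodes come up but stay inquorate because the qdisk never registers, expected votes can also be lowered at runtime as a stopgap, instead of deleting the third node in the web interface as described below:

    cman_tool expected -e 3      (temporary; cluster.conf is not changed)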

Which log file do you need? There are many in /var/log/cluster.

Fabio Ferrari

> :( No cluster.conf and no log. If you want someone to try to help you,
> you need to give more information, not just describe the problem.
>
>
> 2014-02-14 10:54 GMT+01:00 FABIO FERRARI <fabio.ferrari@xxxxxxxxxx>:
>
>> Hello,
>>
>> I have a 3-node high-availability cluster with a quorum disk, running
>> Red Hat 6.
>> Occasionally we have to shut down the entire cluster.
>> When I restart the machines, the cluster doesn't see the cluster
>> partition (/dev/mapper/vg-lv) until all of the machines are up.
>> If I want to start only 2 of the machines, I have to manually remove
>> the third machine from the web interface and restart the other two. If
>> I don't do this, the cluster partition path is never seen and the
>> services never start. Is this normal, or is there a configuration
>> problem in my cluster?
>>
>> Thanks in advance for the answer.
>>
>> Fabio Ferrari
>>
>>
>
>
>
> --
> this is my life and I live it as long as God wills





--
this is my life and I live it as long as God wills
-- 
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
