Re: DRBD on RHEL6 Cluster?

Glad you got it sorted out. :)

digimer

On 07/08/13 10:50, D C wrote:
That was it. Yes, I built the RPMs from source; I'll have to rebuild them.
For the time being I copied /usr/share/cluster/drbd* from an older
cluster, and now everything is working fine.

Thanks for the response.


Thanks,
Dan


On Wed, Aug 7, 2013 at 10:48 AM, Digimer <lists@xxxxxxxxxx> wrote:

    How did you install DRBD? Did you build the RPMs? I believe there is
    a switch for supporting rgmanager if you build manually. I usually
    use the ELRepo RPMs, but if you build manually, try this:

    yum install flex gcc make kernel-devel
    wget -c http://oss.linbit.com/drbd/8.3/drbd-8.3.15.tar.gz
    tar -xvzf drbd-8.3.15.tar.gz
    cd drbd-8.3.15
    ./configure \
        --prefix=/usr \
        --localstatedir=/var \
        --sysconfdir=/etc \
        --with-utils \
        --with-km \
        --with-udev \
        --with-pacemaker \
        --with-rgmanager \
        --with-bashcompletion
    make
    make install
    chkconfig --add drbd
    chkconfig drbd off
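
    After "make install", it's worth confirming the rgmanager resource
    agent actually landed. A minimal sketch, assuming the standard EL6
    agent directory /usr/share/cluster (the check_drbd_agent helper is
    hypothetical, just for illustration):

```shell
# Hedged sanity check: when DRBD is built with --with-rgmanager, the
# rgmanager resource agent is expected at /usr/share/cluster/drbd.sh.
check_drbd_agent() {
    dir=${1:-/usr/share/cluster}
    if [ -e "$dir/drbd.sh" ]; then
        echo "drbd agent present in $dir"
    else
        echo "drbd agent missing from $dir"
    fi
}

check_drbd_agent
```

    If the agent is missing, the build most likely ran without
    --with-rgmanager.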

    digimer

    On 07/08/13 10:23, D C wrote:

        I'm trying to get DRBD set up on a new CentOS 6 cluster.

        Everything seems to be OK in my cluster.conf, except whenever I
        add the drbd resource, it stops working. I also noticed I don't
        see anything in /usr/share/cluster/ for drbd. Am I missing a
        package, maybe?
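
        For reference, here's how I looked at what agents are installed
        (list_agents is just a quick throwaway helper; /usr/share/cluster
        is the standard rgmanager agent directory):

```shell
# List the rgmanager resource agents on this node; if drbd.sh is
# absent, there is no agent for any <drbd> element in cluster.conf.
list_agents() {
    dir=${1:-/usr/share/cluster}
    ls "$dir" 2>/dev/null | grep '\.sh$' || echo "no agents found in $dir"
}

list_agents
```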

        ccs_config_validate fails with:
        [root@e-clust-01 cluster]# ccs_config_validate
        Relax-NG validity error : Extra element rm in interleave
        tempfile:20: element rm: Relax-NG validity error : Element cluster
        failed to validate content
        Configuration fails to validate

        and when I use rg_test, it always skips over the drbd resource
        and anything nested inside it.


        cluster.conf:
        <?xml version="1.0"?>
        <cluster config_version="52" name="testclust">
          <cman cluster_id="46117" expected_votes="1" two_node="1"/>
          <clusternodes>
            <clusternode name="e-clust-01.local" nodeid="1">
              <fence>
                <method name="ipmi">
                  <device name="ipmi-clust-01"/>
                </method>
              </fence>
            </clusternode>
            <clusternode name="e-clust-02.local" nodeid="2">
              <fence>
                <method name="ipmi">
                  <device name="ipmi-clust-02"/>
                </method>
              </fence>
            </clusternode>
          </clusternodes>
          <rm>
            <failoverdomains>
              <failoverdomain name="failapache" nofailback="1" ordered="1"
                  restricted="0">
                <failoverdomainnode name="e-clust-01.local" priority="1"/>
                <failoverdomainnode name="e-clust-02.local" priority="1"/>
              </failoverdomain>
            </failoverdomains>
            <resources>
              <script file="/etc/init.d/httpd" name="init-httpd"/>
              <drbd name="drbd-storage" resource="storage"/>
              <fs name="fs-storage" device="/dev/drbd/by-res/storage/0"
                  fstype="ext4" mountpoint="/storage" options="noatime"/>
            </resources>
            <service autostart="1" name="oddjob">
              <drbd ref="drbd-storage">
              </drbd>
              <fs ref="fs-storage">
                <ip address="192.168.68.50/22" monitor_link="1">
                  <script ref="init-httpd"/>
                </ip>
              </fs>
            </service>
          </rm>
          <logging debug="on" logfile_priority="debug"
              syslog_priority="debug">
            <logging_daemon debug="on" logfile_priority="debug"
                name="rgmanager" syslog_priority="debug"/>
            <logging_daemon debug="on" logfile_priority="debug"
                name="corosync" syslog_priority="debug"/>
            <logging_daemon debug="on" logfile_priority="debug"
                name="fenced" syslog_priority="debug"/>
          </logging>
          <fencedevices>
            <fencedevice agent="fence_ipmilan" auth="password"
                ipaddr="192.168.4.167" login="root" name="ipmi-clust-01"
                passwd="root" privlvl="ADMINISTRATOR"/>
            <fencedevice agent="fence_ipmilan" auth="password"
                ipaddr="192.168.4.168" login="root" name="ipmi-clust-02"
                passwd="root" privlvl="ADMINISTRATOR"/>
          </fencedevices>
        </cluster>





        Thanks,
        Dan




    --
    Digimer
    Papers and Projects: https://alteeve.ca/w/
    What if the cure for cancer is trapped in the mind of a person
    without access to education?





--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster



