Re: cluster failed to start

Otherwise you could have googled ...

And maybe you would have found http://permalink.gmane.org/gmane.linux.highavailability.user/14360, which might have been helpful ...
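
For the extra NFS shares: each one is essentially another <service> block inside the <rm> section of cluster.conf. A rough sketch of one such block, where every name, device, path, and address below is a placeholder (your pastebin will have the real ones):

    <service autostart="1" name="nfs-share2">
        <!-- placeholder device/mountpoint; use the real LUN and path -->
        <fs device="/dev/sdc1" fstype="ext4" mountpoint="/exports/share2"
            name="share2-fs" force_unmount="1">
            <nfsexport name="share2-export">
                <nfsclient name="share2-clients" target="*" options="rw,sync"/>
            </nfsexport>
        </fs>
        <!-- placeholder floating IP for this share -->
        <ip address="192.168.1.102" monitor_link="1"/>
    </service>

Repeat per share with unique names, devices, and addresses, bump config_version at the top, then validate and push the change with "ccs_config_validate" and "cman_tool version -r".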


Kind regards,

    Heiko

On 13.09.2012 11:40, Ben .T.George wrote:
Hi

Yes, these machines have ILOM, but there is no ILOM fencing agent listed on the LUCI page.

Can I do my testing without fencing for now?

Please leave the fencing part aside and give me the instructions, because in my production environment we are using Cisco UCS, and Cisco UCS fencing is listed on the LUCI page.

Regards,
Ben

On Thu, Sep 13, 2012 at 12:28 PM, Heiko Nardmann <heiko.nardmann@xxxxxxxxxxxxx> wrote:
Hi!

Those machines should have some ALOM/ILOM or similar lights-out mechanism which you could use for power fencing, for example.

You should then check whether a corresponding fence agent exists.
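
A quick way to check on RHEL6 might look like this (agent names vary by vendor):

    # list the fence agents installed on the node
    ls /usr/sbin/fence_*
    # most ILOM/ALOM service processors speak IPMI over LAN
    fence_ipmilan -h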


Kind regards,

    Heiko

On 13.09.2012 10:11, Ben .T.George wrote:
Hi

Thanks for your reply. But how can I configure fencing on this test setup? I am running two Sun X4200 machines, and FreeNAS is used to provide the iSCSI storage.

My current NFS setup is working perfectly, and in the production setup I need to implement this on Cisco UCS. I saw that Cisco UCS is there under the fencing options.

Please help me add three more NFS shares to my existing configuration.


Regards,
Ben

On Thu, Sep 13, 2012 at 9:53 AM, digimer <lists@xxxxxxxxxx> wrote:
Please add fencing. Without it, the first time a node fails, your cluster will hang (by design). Most servers have IPMI (or similar), so you can probably use fence_ipmilan or one of the brand-specific agents like fence_ilo for HP's iLO.
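
For IPMI-based fencing, the relevant cluster.conf pieces might look roughly like this; the address, login, and password below are placeholders for your BMC's actual settings:

    <clusternode name="node1" nodeid="1">
        <fence>
            <method name="ipmi">
                <device name="ipmi-node1"/>
            </method>
        </fence>
    </clusternode>
    ...
    <fencedevices>
        <!-- placeholder credentials; point ipaddr at node1's BMC, not the host -->
        <fencedevice agent="fence_ipmilan" name="ipmi-node1"
                     ipaddr="10.0.0.11" login="admin" passwd="secret"/>
    </fencedevices>

It is worth testing the agent by hand first, e.g. "fence_ipmilan -a 10.0.0.11 -l admin -p secret -o status".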


On 09/13/2012 02:44 AM, Ben .T.George wrote:
Hi

I manually created a cluster.conf file and copied it to my 2 nodes. Now
it's working fine with one NFS HA share. I need to add three more shares.

Please check http://pastebin.com/eM08vrC5 (this is my cluster.conf).

How can I add three more shares to this cluster.conf file?

Please help; I got stuck with this project. After testing this setup I need
to implement it in production.


Regards,
Ben



On Wed, Sep 12, 2012 at 10:35 PM, Jan Pokorný <jpokorny@xxxxxxxxxx
<mailto:jpokorny@xxxxxxxxxx>> wrote:

    Hello Ben,

    On 12/09/12 16:39 +0300, Ben .T.George wrote:
     > I created a 2-node cluster with RHEL6 using the Red Hat Cluster Suite.
     >
     > I joined the cluster nodes using LUCI.
     >
     > I created one IP as a resource and a service using that IP.
     >
     > I started the cluster, but the LUCI status shows it as disabled,
     > even though that IP is pinging and it is added on node2.
     >
     > "ip addr" shows that IP.
     >
     > #clustat shows both nodes online.

    actually, thanks for bringing up what turned out to be a real issue [1].
    Could you please try "service modclusterd start" across the nodes
    (and perhaps make the service persistent with chkconfig) to see
    whether it helps you?
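    For example, on each node:

        # start the daemon now ...
        service modclusterd start
        # ... and have it come up again after a reboot
        chkconfig modclusterd on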

    In the meantime, this should serve as a workaround in such cases;
    a more proper solution for this bug is underway.

    [1] https://bugzilla.redhat.com/show_bug.cgi?id=856785

    Thanks,
    Jan

    --
    Linux-cluster mailing list
    Linux-cluster@xxxxxxxxxx

#!/usr/bin/env python
# Mysignature.py :)


signature = """Ben.T.George
Linux System Administrator
Diyar United Company
Kuwait
Phone : +965 - 50629829"""

print(signature)





-- 
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
