Re: clustat problem

[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

 



Hello Celso,

Well, your suggestion might be the solution to the problem, but since I think its a quorum latency problem, would the parameters "cludb -p clumemb%rtp 50" and "cludb -p cluquorumd%rtp 50" help on this this issue?
I was digging into the Cluster Suite documentation and I found these parameters.
Would those help on this issue without changing the heartbeat method?

Also take a look in this Kbase bellow, it has some interesting tunning parameters for Red Hat's Clsuter Suite v3:
http://kbase.redhat.com/faq/FAQ_79_7722.shtm

Regards,
Filipe Miranda


On 9/7/06, Celso K. Webber <celso@xxxxxxxxxxxxxxxx> wrote:
Hi Filipe!

I think your case is a little bit different from Jordi's case, since you
are using Cluster Suite v3 and he is using v4.

From my own experience, under CSv3 I had this kind o problem when using
high latency quorum devices. So I had to change from disk tiebraker to
network tiebraker. I imagine you're using disk tiebraker, aren't you?

Please, would someone please confirm that Filipe's case could be solved
by changing the heartbeat method? It worked for me in the past, but I'm
not pretty sure that this was the actual solution.

Thanks,

Celso.

Filipe Miranda escreveu:
> Hi there,
>
> I'm having the same problem!
> I'm using RHEL3.8 for Itanium and RedHat Cluster Suite U8. The cluster
> is composed of 2 HP 4CPUs servers and we are using an EMC ClarionCX700
> to hold the quorum partitions and data partitions.
> One more thing that I noticed, eventhough the members are shown ative on
> both nodes, any action on the node that shows the active service does
> not get propagated to the other member.
>
> I already checked the configuration of the rawdevices, and I also used
> the shutil utility and it reported no problems with the quorum partitions.
>
> Does anybody have any suggestions?
>
> Thank you,
>
>
> On 9/2/06, *Jordi Prats Català* < jprats@xxxxxxxx
> <mailto:jprats@xxxxxxxx> > wrote:
>
>     Hi,
>     I'm getting different outputs of clustat utility on each node:
>
>     node1:
>     # clustat
>     Member Status: Quorate
>
>       Member Name                              Status
>       ------ ----                              ------
>       node1                                    Online, Local, rgmanager
>       node2                                    Online, rgmanager
>
>       Service Name         Owner (Last)                   State
>       ------- ----         ----- ------                   -----
>       ptoheczas            node2                          started
>       xoqil                node2                          started
>       ymsgh                node1                          started
>       vofcvhas             node2                          started
>
>     node2:
>     # clustat
>     Member Status: Quorate
>
>       Member Name                              Status
>       ------ ----                              ------
>       node1                                    Online, rgmanager
>       node2                                    Online, Local, rgmanager
>
>
>     (disappears service's info)
>
>     Rebooting disapears this problem (displays same info in both nodes) for
>     a few weeks. After that it appears again.
>
>     Do you know what's going on?
>
>     Thanks,
>
>     --
>     ......................................................................
>             __
>            / /          Jordi Prats Català
>     C E / S / C A      Departament de Sistemes
>          /_/            Centre de Supercomputació de Catalunya
>
>     Gran Capità, 2-4 (Edifici Nexus) · 08034 Barcelona
>     T. 93 205 6464 · F.  93 205 6979 · jprats@xxxxxxxx
>     <mailto:jprats@xxxxxxxx>
>     ......................................................................
>
>     --
>     Linux-cluster mailing list
>     Linux-cluster@xxxxxxxxxx <mailto:Linux-cluster@xxxxxxxxxx>
>     https://www.redhat.com/mailman/listinfo/linux-cluster
>     <https://www.redhat.com/mailman/listinfo/linux-cluster>
>
>
>
>
> --
> Esta mensagem foi verificada pelo sistema de antivírus e
> acredita-se estar livre de perigo.
>
>
> ------------------------------------------------------------------------
>
> --
> Linux-cluster mailing list
> Linux-cluster@xxxxxxxxxx
> https://www.redhat.com/mailman/listinfo/linux-cluster

--
*Celso Kopp Webber*

celso@xxxxxxxxxxxxxxxx <mailto:celso@xxxxxxxxxxxxxxxx>

*Webbertek - Opensource Knowledge*
(41) 8813-1919
(41) 3284-3035


--
Esta mensagem foi verificada pelo sistema de antivírus e
acredita-se estar livre de perigo.

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster



--
---
Filipe T Miranda
Red Hat Certified Engineer
--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster

[Index of Archives]     [Corosync Cluster Engine]     [GFS]     [Linux Virtualization]     [Centos Virtualization]     [Centos]     [Linux RAID]     [Fedora Users]     [Fedora SELinux]     [Big List of Linux Books]     [Yosemite Camping]

  Powered by Linux