Re: Redhat without qdisk

Hi,

> But if you have a problem with your storage it's normal that the node
> gets fenced, because your cluster services depend on the storage.

Well, no, my clustered services do not depend on SAN storage.

> Or maybe you would like to have a cluster running without a SAN disk?

I need the qdisk for two reasons:

- heuristics
- to safely achieve quorum in a two-node cluster if only one node is up (see the sketch below).
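
For the two-node case that means something roughly along these lines in
cluster.conf (fragments that would sit inside the <cluster> element; the
device label, ping target and all timings below are placeholders only,
not real values):

  <!-- two nodes plus the quorum disk = 3 expected votes, so a single
       node together with the qdisk vote (2 of 3) stays quorate -->
  <cman expected_votes="3" two_node="0"/>

  <quorumd interval="2" tko="10" votes="1" min_score="1" label="myqdisk">
     <!-- heuristic: the node keeps its qdisk vote only while the
          gateway answers; address and timings are placeholders -->
     <heuristic program="ping -c1 -w1 192.168.1.254" score="1" interval="2" tko="3"/>
  </quorumd>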

regards, Gunther

On 13 April 2012 at 11:20, Gunther Schlegel <schlegel@xxxxxxxxx> wrote:

    Hi Lon,

    > > Why did Red Hat make the qdisk a tie-breaker, and why do some
    > > people from support say it is optional or sometimes that it is
    > > not needed?
    >
    >  It is optional and is often not needed.  It was developed really
    >  for two purposes:
    >
    >  - to help resolve fencing races (which can be resolved using
    >    delays or other tactics)
    >
    >  - to allow 'last-man-standing' in >2-node clusters.
    >
    >  With qdiskd you can go from 4 to 1 node (given properly
    >  configured heuristics).  The other 3 nodes then, because
    >  heuristics fail, can't "gang up" (by forming a quorum) on the
    >  surviving node and take over - this means your critical service
    >  stays running and available.  The problem is that, in practice,
    >  the "last node" is rarely able to handle the workload.
    >
    >  This behavior is obviated by features in corosync 2.0, which
    >  gives administrators the ability to state that a -new- quorum can
    >  only form if all members are present (but joining an existing
    >  quorum is always allowed).
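
    (If I understand it right, the corosync 2.x behaviour described
    above is votequorum's wait_for_all option. A minimal sketch of the
    relevant corosync.conf section, assuming a plain two-node votequorum
    setup, not a config taken from a real cluster:)

        quorum {
            provider: corosync_votequorum
            expected_votes: 2
            # a new quorum only forms once all nodes have been seen at
            # the same time; joining an already-quorate cluster is not
            # affected
            wait_for_all: 1
        }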


    Is this in RHEL6? I am still trying to solve the following situation:

    - 2 node cluster without need for shared storage (no gfs)
    - qdiskd in place because of the heuristics.
    - Cluster is fine if both nodes have network communication and
    heuristics reach the minimum score.

    Problem: if the shared storage the qdisk resides on becomes
    unavailable (but everything else is fine), a node will be fenced.
    It actually happens at the moment the shared storage comes back
    online: the node that re-establishes the storage link first wins
    and fences the other one. I try to mitigate that with very long
    timeout settings, but those same timeouts also delay eviction when
    a node genuinely has to be switched out of the cluster.
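
    For illustration, the timing knobs involved (numbers made up, not
    the real settings here): qdiskd only evicts a node after roughly
    interval * tko seconds without updates, and the cman/totem timeouts
    then have to be raised above that, so every failure detection gets
    slower as well:

        <!-- qdiskd gives up on a node after about interval * tko
             seconds (here 3 * 23, roughly 69s); illustrative only -->
        <quorumd interval="3" tko="23" votes="1" label="myqdisk"/>

        <!-- quorum_dev_poll and the totem token (both in milliseconds)
             are then usually raised well above the qdisk timeout -->
        <cman quorum_dev_poll="140000" expected_votes="3"/>
        <totem token="140000"/>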

    I would really appreciate it if qdiskd would simply withdraw its
    quorum vote in that case and not do any fencing at all. The cluster
    would survive, because the two nodes still gather quorum as long as
    the cluster network connection between them is up.

    best regards, Gunther






--
this is my life and I live it for as long as God wills



--
Gunther Schlegel
Head of IT Infrastructure




.............................................................
Riege Software International GmbH  Phone: +49 2159 91480
Mollsfeld 10                       Fax: +49 2159 914811
40670 Meerbusch                    Web: www.riege.com
Germany                            E-Mail: schlegel@xxxxxxxxx
--                                 --
Commercial Register:               Managing Directors:
Amtsgericht Neuss HRB-NR 4207      Christian Riege
VAT Reg No.: DE120585842           Gabriele  Riege
                                  Johannes  Riege
                                  Tobias    Riege
.............................................................
YOU CARE FOR FREIGHT, WE CARE FOR YOU



--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
