RE: GFS as a Resource

Here is my clustat....

Cluster Status for Xen @ Mon Aug 18 13:28:37 2008
Member Status: Quorate

 Member Name                              ID   Status
 ------ ----                              ---- ------
 xen1.smartechcorp.net                        1 Online, Local, rgmanager
 xen2.smartechcorp.net                        2 Online, rgmanager

 Service Name                    Owner (Last)                    State
 ------- ----                    ----- ------                    -----
 service:GFS Mount Xen1          xen1.smartechcorp.net           started
 service:GFS Mount Xen2          xen2.smartechcorp.net           started

Here is my cluster.conf...

<?xml version="1.0"?>
<cluster alias="Xen" config_version="53" name="Xen">
        <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="-1"/>
        <clusternodes>
                <clusternode name="xen1.smartechcorp.net" nodeid="1" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="manual" nodename="xen1.smartechcorp.net"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="xen2.smartechcorp.net" nodeid="2" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="manual" nodename="xen2.smartechcorp.net"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <cman expected_votes="1" two_node="1"/>
        <fencedevices>
                <fencedevice agent="fence_manual" name="manual"/>
        </fencedevices>
        <rm>
                <failoverdomains>
                        <failoverdomain name="bias-xen1" nofailback="0" ordered="1" restricted="0">
                                <failoverdomainnode name="xen1.smartechcorp.net" priority="1"/>
                                <failoverdomainnode name="xen2.smartechcorp.net" priority="2"/>
                        </failoverdomain>
                        <failoverdomain name="bias-xen2" nofailback="0" ordered="1" restricted="0">
                                <failoverdomainnode name="xen1.smartechcorp.net" priority="2"/>
                                <failoverdomainnode name="xen2.smartechcorp.net" priority="1"/>
                        </failoverdomain>
                        <failoverdomain name="gfs-xen1" nofailback="0" ordered="0" restricted="1">
                                <failoverdomainnode name="xen1.smartechcorp.net" priority="1"/>
                        </failoverdomain>
                        <failoverdomain name="gfs-xen2" nofailback="0" ordered="0" restricted="1">
                                <failoverdomainnode name="xen2.smartechcorp.net" priority="1"/>
                        </failoverdomain>
                </failoverdomains>
                <resources/>
                <service autostart="1" domain="gfs-xen2" exclusive="0" name="GFS Mount Xen2" recovery="restart"/>
                <service autostart="1" domain="gfs-xen1" exclusive="0" name="GFS Mount Xen1" recovery="restart"/>
        </rm>
        <quorumd device="/dev/sdb5" interval="1" min_score="1" tko="10" votes="1">
                <heuristic interval="2" program="ping -c3 -t2 10.10.10.1" score="1"/>
        </quorumd>
</cluster>

Without an entry in fstab, my GFS file systems never mount, so I am
wondering how I can leave those entries out of fstab entirely.
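
For reference, this is the kind of fstab line I would like to get rid of (the
device and mount point here are only placeholders, not my real ones):

/dev/vg_xen/lv_gfs    /gfs    gfs    defaults    0 0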

---

Chris Edwards
Smartech Corp.
Div. of AirNet Group
http://www.airnetgroup.com
http://www.smartechcorp.net
cedwards@xxxxxxxxxxxxxxxx
P:  423-664-7678 x114
C:  423-593-6964
F:  423-664-7680


-----Original Message-----
From: linux-cluster-bounces@xxxxxxxxxx
[mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Maurizio Rottin
Sent: Friday, August 15, 2008 2:06 PM
To: linux clustering
Subject: Re:  GFS as a Resource

2008/8/15 Chris Edwards <cedwards@xxxxxxxxxxxxxxxx>:
> Whoops, scratch that last post.  I now have it working by leaving the entry
> in fstab without the noauto and turning GFS off with chkconfig and allowing
> the cluster service to turn it on.
> Thanks again!

I believe that's the wrong way.
I know it works that way, but:
- if you have only one node, do not use GFS; it's slow!
- if you have more than one node, use it -- and if you can, test GFS2
as well (it should be faster) -- but do not mount it via fstab (I
mean, you don't need it listed in fstab at all).
GFS only works while all the nodes are "up and running", which means
that if one node cannot be reached but is still up (network or other
problems involved), no one can use the GFS filesystem.
You must use it as a resource, and you must have at least one fencing
method for each node in the cluster.
That way, once a node becomes unreachable, it gets fenced and the
other nodes can keep writing happily to the filesystem. This matters
because a node that "can be considered up and maybe running" may still
be writing to the filesystem, or it may think it is the only node left
in the cluster (think about a switch problem, or ARP spoofing); if you
then run clustat on that node you will see all the other nodes down
and only that one up. This is why you must have a fencing method: that
node HAS TO be shut down or rebooted, otherwise the filesystem will be
blocked and no read or write can be issued by any of the nodes in the
cluster.
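
For example, a power-based fence device looks something like this in
cluster.conf (this is only a sketch; the agent, addresses and credentials
are placeholders for whatever your hardware actually supports):

        <fencedevices>
                <fencedevice agent="fence_ipmilan" name="ipmi-xen1" ipaddr="10.10.10.11" login="admin" passwd="secret" lanplus="1"/>
                <fencedevice agent="fence_ipmilan" name="ipmi-xen2" ipaddr="10.10.10.12" login="admin" passwd="secret" lanplus="1"/>
        </fencedevices>

                <clusternode name="xen1.smartechcorp.net" nodeid="1" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="ipmi-xen1"/>
                                </method>
                        </fence>
                </clusternode>

fence_manual is fine for testing, but it needs a human to acknowledge the
fence before recovery continues, so the filesystem stays blocked until
someone intervenes.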

I am not talking about how it works in theory (I never attended a RH
session), but believe me, in practice it works like that!

Create a global resource (and always create a global resource, even if
it is a fencing or a vsftpd resource that every node has in common)
and mount it on every node you need as a service. Do not think an
fstab entry is the best thing you can have; it is not. It can lock
your filesystem until all the nodes are really working and talking to
each other.
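
Something like this in the <rm> section is what I mean (the device,
mount point and resource names below are only examples; adjust them to
your setup):

        <rm>
                <resources>
                        <clusterfs name="xen_gfs" fstype="gfs" device="/dev/vg_xen/lv_data" mountpoint="/data" force_unmount="0" options=""/>
                </resources>
                <service autostart="1" domain="gfs-xen1" exclusive="0" name="GFS Mount Xen1" recovery="restart">
                        <clusterfs ref="xen_gfs"/>
                </service>
                <service autostart="1" domain="gfs-xen2" exclusive="0" name="GFS Mount Xen2" recovery="restart">
                        <clusterfs ref="xen_gfs"/>
                </service>
        </rm>

With something like that, the fstab entry (and the gfs init script) is no
longer needed, because rgmanager does the mount when it starts each
service.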

-- 
mr


--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
