Re: Info on lvm setup for cluster without clvmd

On Wed, 8 Apr 2009 19:37:51 +0200 Nemeth, Norber wrote:
> Have you checked this? : http://sources.redhat.com/cluster/wiki/LVMFailover
Thanks, I then checked it and also the "official" RH EL knowledge base at
http://kbase.redhat.com/faq/docs/DOC-3068
that describes just the same things.

I've done some tests and it seems to work quite well.
The only thing is that I would file a Bugzilla entry, because there is
an "unhappy" check inside /usr/share/cluster/lvm.sh: if you edit
lvm.conf on a node (for example, I edited it to set activation=1 for
more debugging), then you will not be able to relocate an HA-LVM based
service onto that node (at least until your next kernel update...)
The code is:

        # Next, we need to ensure that their initrd has been updated
        # If not, the machine could boot and activate the VG outside
        # the control of rgmanager
        ##
        # Fixme: we might be able to perform a better check...
        if [ "$(find /boot -name '*.img' -newer /etc/lvm/lvm.conf)" == "" ]; then
                ocf_log err "HA LVM:  Improper setup detected"
                ocf_log err "- initrd image needs to be newer than lvm.conf"
                return $OCF_ERR_GENERIC
        fi
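The "Fixme" comment suggests the check could be tightened. One possible
alternative, as a sketch only (it assumes the running kernel's initrd
lives at a path like /boot/initrd-$(uname -r).img, which may differ per
distribution), would compare only that one image's timestamp instead of
matching any *.img under /boot:

```shell
# Sketch only, not the agent's actual check: compare a single initrd
# image against lvm.conf.  The path layout is an assumption.
initrd_newer_than_conf() {
    # $1 = initrd image, $2 = lvm.conf; succeeds when the initrd
    # exists and is at least as new as lvm.conf
    [ -f "$1" ] && [ ! "$1" -ot "$2" ]
}

# Hypothetical call on a real node:
# initrd_newer_than_conf "/boot/initrd-$(uname -r).img" /etc/lvm/lvm.conf
```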

Apart from the unhappy check itself, the script also fails to write the
second error line to /var/log/messages...
In fact I only get
Apr 24 13:01:19 orastud2 clurgmgrd[6809]: <notice> Starting stopped
service service:DWHSRV
Apr 24 13:01:19 orastud2 clurgmgrd: [6809]: <err> HA LVM:  Improper
setup detected
Apr 24 13:01:19 orastud2 clurgmgrd[6809]: <notice> start on lvm
"DWH_APPL" returned 1 (generic error)
Apr 24 13:01:19 orastud2 clurgmgrd[6809]: <warning> #68: Failed to
start service:DWHSRV; return value: 1
Apr 24 13:01:19 orastud2 clurgmgrd[6809]: <notice> Stopping service
service:DWHSRV

but not the message about
- initrd image needs to be newer than lvm.conf

I presume the leading minus is the problem: if I change lvm.sh to use
ocf_log err "initrd image needs to be newer than lvm.conf"

then I correctly get
Apr 24 13:13:19 orastud2 clurgmgrd[6809]: <notice> Starting stopped
service service:DWHSRV
Apr 24 13:13:20 orastud2 clurgmgrd: [6809]: <err> HA LVM:  Improper
setup detected
Apr 24 13:13:20 orastud2 clurgmgrd: [6809]: <err> initrd image needs
to be newer than lvm.conf
Apr 24 13:13:20 orastud2 clurgmgrd[6809]: <notice> start on lvm
"DWH_APPL" returned 1 (generic error)
Apr 24 13:13:20 orastud2 clurgmgrd[6809]: <warning> #68: Failed to
start service:DWHSRV; return value: 1
Apr 24 13:13:20 orastud2 clurgmgrd[6809]: <notice> Stopping service
service:DWHSRV

This means the same leading-dash message lines should be changed for
all the other checks inside the script...
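This behaviour would be consistent with the message being swallowed by
option parsing somewhere on its way to syslog: a positional argument
that begins with "-" can look like an unknown flag. A minimal sketch of
the effect, where fake_clulog is a stand-in and not the real clulog:

```shell
# fake_clulog is a stand-in sketch, not the real clulog: it rejects any
# first argument that looks like an option, which is one plausible way
# a message beginning with "-" gets dropped instead of logged.
fake_clulog() {
    case "$1" in
        -*) echo "fake_clulog: unrecognized option '$1'" >&2; return 1 ;;
        *)  echo "LOG: $*" ;;
    esac
}

fake_clulog "initrd image needs to be newer than lvm.conf"    # logged
fake_clulog "- initrd image needs to be newer than lvm.conf"  # rejected
```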

You can reproduce this by running touch on lvm.conf and then trying to
relocate a service to that node. If you then touch any .img file inside
the /boot directory, you are able to relocate again...
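The timestamp comparison itself can be exercised outside the cluster
with temporary stand-in files (the paths below are placeholders, not
the real /etc/lvm/lvm.conf and /boot):

```shell
# Stand-in files mimic /etc/lvm/lvm.conf and /boot/*.img so the find(1)
# test from lvm.sh can be exercised without a cluster node.
tmp=$(mktemp -d)
mkdir "$tmp/boot"
touch "$tmp/boot/initrd-demo.img"
sleep 1
touch "$tmp/lvm.conf"                  # lvm.conf is now newer: check fails
if [ -z "$(find "$tmp/boot" -name '*.img' -newer "$tmp/lvm.conf")" ]; then
    echo "relocation would be refused"
fi
sleep 1
touch "$tmp/boot/initrd-demo.img"      # image newer again: check passes
if [ -n "$(find "$tmp/boot" -name '*.img' -newer "$tmp/lvm.conf")" ]; then
    echo "relocation allowed again"
fi
```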

Thanks anyway for the original pointer....

Gianluca
On Wed, Apr 8, 2009 at 5:19 PM, Gianluca Cecchi
<gianluca.cecchi@xxxxxxxxx> wrote:
> Hello,
> I would like to setup a two-node cluster where I will have some
> services relying on filesystems on lvm resources.
> I'm using RHEL 5 U3, but I only have entitlements for RHEL Clustering
> and not for Cluster-Storage, so I cannot use clvmd as in the other
> clusters I set up previously.
> I don't need GFS, so I think this is the correct setup for me, also
> from a legal point of view.
> From a documentation point of view I see:
> Note:
> Shared storage for use in Red Hat Cluster Suite requires that you be
> running the cluster
> logical volume manager daemon (clvmd) or the High Availability Logical Volume
> Management agents (HA-LVM). If you are not able to use either the
> clvmd daemon or
> HA-LVM for operational reasons or because you do not have the correct
> entitlements, you
> must not use single-instance LVM on the shared disk as this may result
> in data corruption.
> If you have any concerns please contact your Red Hat service representative.
>
> I suppose that with HA-LVM here we are talking about lvm.sh script
> (with the other two lvm_by_lv.sh and lvm_by_vg.sh referred into it)
> inside /usr/share/cluster/ directory.
>
> At the moment the cluster is not formed at all; I only have setup a
> basic cluster.conf without the lvm resources in it.
> I have set up volume groups and logical volumes on one node. The disks
> are seen by the other node too but, correctly, it does not yet see the
> LVM parts.
> In lvm.conf of both I have
> locking_type = 1
> and I think it should remain this way in my setup, correct?
>
> Which is the correct approach in my situation if I want to go straight
> with command line and not graphical tools?
> In old days when lvm was not cluster-aware I had to:
> vgchange -an VG on the first node and then vgchange -ay VG on the
> second to let it create the devices and the lvm cache for the first
> time after creation.
>
> After this, it seems I have to populate the "activation" section of
> lvm.conf, filling in the volume_list part, to prevent concurrent
> activation of the volume groups used by the cluster.
>
> Any good documentation reference for this? I haven't found anything so far...
>
> I thank all the GUI creators (Conga and system-config-cluster), and I
> think there is a good audience for them, but I would like to be able
> to do this manually too... I think it helps a lot in understanding the
> internal mechanisms and in debugging when problems arise...
>
> Thanks in advance,
> Gianluca
>

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
