Re: failed to implement HA-LVM with clvmd rhcs5.6


 





On Thu, Feb 3, 2011 at 12:35 PM, Shalom <sklemer@xxxxxxxxx> wrote:


https://access.redhat.com/kb/docs/DOC-3068


On Thu, Feb 3, 2011 at 11:13 AM, Corey Kovacs <corey.kovacs@xxxxxxxxx> wrote:
Is using HA-LVM with clvmd a new capability? It's always been my
understanding that the LVM locking type for HA-LVM had to be set
to '1'.

I'd much rather be using clvmd if that is the way to go. Can you point
me to the docs you are seeing these instructions in, please?

As for why your config isn't working, clvmd requires that its
resources actually be tagged as clustered volumes, so you might try
doing that and see how it goes.
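For reference, the attribute check the resource agent does boils down to "does the VG attr string carry the clustered 'c' bit". A rough bash sketch of that check (the VG name "shared_vg" is just an example):

```shell
# Rough sketch of the check rgmanager's lvm agent performs: a VG counts
# as clustered when the 'c' bit appears in its attr string.
is_clustered() {
    [[ $1 =~ .....c ]]
}

# With a live cluster you would feed in real output, e.g.:
#   is_clustered "$(vgs -o attr --noheadings shared_vg)"
# and set the bit with:  vgchange -cy shared_vg
is_clustered "wz--nc" && echo "clustered"        # prints "clustered"
is_clustered "wz--n-" || echo "not clustered"    # prints "not clustered"
```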

-C

On Thu, Feb 3, 2011 at 7:26 AM, Shalom <sklemer@xxxxxxxxx> wrote:
> Hello.
>
>
>
> I followed the Red Hat instructions trying to install HA-LVM with clvmd
> (RHCS 5.6, rgmanager 2.0.52-9).
>
>
>
> I can't make it work.
>
>
>
> lvm.conf: locking_type = 3
>
> clvmd is running.
>
> The service fails, saying HA-LVM is not configured correctly.
>
> The manual says we should run "lvchange -an lvxx", edit
> cluster.conf, and start the service.
>
>
>
> But from the rgmanager lvm resource agent (lvm.sh):
>
>
>
> case $1 in
> start)
>         if ! [[ $(vgs -o attr --noheadings $OCF_RESKEY_vg_name) =~ .....c ]]; then
>                 ha_lvm_proper_setup_check || exit 1
>
>
>
> If the VG is not tagged as clustered, then ha_lvm looks for volume_list
> in lvm.conf.
>
> I am confused: should the VG be tagged as clustered? (BTW, the
> old-fashioned HA-LVM worked with no problems.)
>
> Red Hat instructions:
>
> To set up HA LVM Failover (using the preferred CLVM variant), perform the
> following steps:
>
> 1. Ensure that the parameter locking_type in the global section
> of /etc/lvm/lvm.conf is set to the value '3', that all the necessary LVM
> cluster packages are installed, and the necessary daemons are started (like
> 'clvmd' and the cluster mirror log daemon, if necessary).
>
>
>
> 2. Create the logical volume and filesystem using standard LVM2 and file
> system commands. For example:
>
> # pvcreate /dev/sd[cde]1
>
> # vgcreate <volume group name> /dev/sd[cde]1
>
> # lvcreate -L 10G -n <logical volume name> <volume group name>
>
> # mkfs.ext3 /dev/<volume group name>/<logical volume name>
>
> # lvchange -an <volume group name>/<logical volume name>
>
>
>
> 3. Edit /etc/cluster/cluster.conf to include the newly created logical
> volume as a resource in one of your services. Alternatively, configuration
> tools such as Conga or system-config-cluster may be used to create these
> entries. Below is a sample resource manager section
> from /etc/cluster/cluster.conf:
>
>
>
> <rm>
>    <failoverdomains>
>        <failoverdomain name="FD" ordered="1" restricted="0">
>           <failoverdomainnode name="neo-01" priority="1"/>
>           <failoverdomainnode name="neo-02" priority="2"/>
>        </failoverdomain>
>    </failoverdomains>
>    <resources>
>        <lvm name="lvm" vg_name="shared_vg" lv_name="ha-lv"/>
>        <fs name="FS" device="/dev/shared_vg/ha-lv" force_fsck="0" force_unmount="1" fsid="64050" fstype="ext3" mountpoint="/mnt" options="" self_fence="0"/>
>    </resources>
>    <service autostart="1" domain="FD" name="serv" recovery="relocate">
>        <lvm ref="lvm"/>
>        <fs ref="FS"/>
>    </service>
> </rm>
>
>
>
> Regards
>
> Shalom.
>
>
>
> --
> Linux-cluster mailing list
> Linux-cluster@xxxxxxxxxx
> https://www.redhat.com/mailman/listinfo/linux-cluster
>




What is a Highly Available LVM (HA-LVM) configuration and how do I implement it?

Article ID: 3068 - Created on: May 28, 2008 6:00 PM - Last Modified:  Jan 18, 2011 10:52 AM

Issue

 

Uncontrolled simultaneous access to shared storage can lead to data corruption.  Storage access must be managed like any other active/passive service - it must only be active on a single machine at a time.

Environment

  • Red Hat Enterprise Linux 4.5+
  • Red Hat Enterprise Linux 5
  • Red Hat Enterprise Linux 6

Resolution

As of Red Hat Enterprise Linux 4.5, rgmanager supports highly available LVM volumes (HA-LVM) in a failover configuration. This is distinct from the active/active configurations enabled by Clustered LVM (CLVM). The choice between HA-LVM and CLVM should be based on the needs of the applications or services being deployed. If the applications are cluster-aware and have been tuned to run simultaneously on multiple machines at a time, then CLVM should be used. If the applications run optimally in active/passive (failover) configurations, then HA-LVM is the correct choice.

Most applications will run better in an active/passive configuration, as they are not designed or optimized to run concurrently with other instances. Running an application that is not cluster-aware on clustered logical volumes may degrade performance if the logical volume is mirrored or snapshotted, because in those cases there is cluster communication overhead for the logical volumes themselves. A cluster-aware application must achieve performance gains that exceed the losses introduced by cluster file systems and cluster-aware logical volumes; this is easier for some applications and workloads than for others. To choose between the two LVM variants, determine the requirements of the cluster and whether the extra effort of optimizing for an active/active cluster will pay dividends. Most users will achieve the best HA results from HA-LVM.

 

HA-LVM and CLVM are similar in that both prevent corruption of LVM metadata and logical volumes, which could otherwise occur if multiple machines were allowed to make overlapping changes. HA-LVM imposes the restriction that a logical volume can only be activated exclusively; that is, active on only one machine at a time. This means that only local (non-clustered) implementations of the storage drivers are used. Avoiding the cluster coordination overhead in this way increases performance. CLVM does not impose these restrictions: a user is free to activate a logical volume on all machines in a cluster. This forces the use of cluster-aware storage drivers, which allow cluster-aware file systems and applications to be put on top.

 

HA-LVM can be set up to use one of two methods for achieving its mandate of exclusive logical volume activation. The first method uses local machine locking and LVM "tags". It is available on RHEL 4.5+, RHEL 5, and RHEL 6. It has the advantage of not requiring any LVM cluster packages; however, it involves more setup steps and does not prevent an administrator from mistakenly removing a logical volume from a node in the cluster where it is not active. The second method uses CLVM, but will only ever activate the logical volumes exclusively. It has the advantage of easier setup and better prevention of administrative mistakes (like removing a logical volume that is in use); however, it requires all the necessary cluster LVM packages. The CLVM variant is available on RHEL 5.6+ and RHEL 6, and it is the preferred method.

 

To set up HA LVM Failover (using the preferred CLVM variant), perform the following steps:

 

1. Ensure that the parameter locking_type in the global section of /etc/lvm/lvm.conf is set to the value '3', that all the necessary LVM cluster packages are installed, and the necessary daemons are started (like 'clvmd' and the cluster mirror log daemon - if necessary).
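A quick way to sanity-check this step is to parse the value out of lvm.conf; a minimal sketch (reading from a sample string here rather than the real /etc/lvm/lvm.conf):

```shell
# Extract the locking_type value from lvm.conf-style input on stdin.
locking_type_of() {
    awk -F= '/^[[:space:]]*locking_type[[:space:]]*=/ {
        gsub(/[[:space:]]/, "", $2); print $2
    }'
}

# In practice:  locking_type_of < /etc/lvm/lvm.conf
# (and check the daemon with:  service clvmd status)
printf 'global {\n    locking_type = 3\n}\n' | locking_type_of   # prints 3
```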

 

2. Create the logical volume and filesystem using standard LVM2 and file system commands. For example:

# pvcreate /dev/sd[cde]1

# vgcreate <volume group name> /dev/sd[cde]1

# lvcreate -L 10G -n <logical volume name> <volume group name>

# mkfs.ext3 /dev/<volume group name>/<logical volume name>

# lvchange -an <volume group name>/<logical volume name>

 

3. Edit /etc/cluster/cluster.conf to include the newly created logical volume as a resource in one of your services. Alternatively, configuration tools such as Conga or system-config-cluster may be used to create these entries.  Below is a sample resource manager section from /etc/cluster/cluster.conf:

 

<rm>  
   <failoverdomains>
       <failoverdomain name="FD" ordered="1" restricted="0">
          <failoverdomainnode name="neo-01" priority="1"/>
          <failoverdomainnode name="neo-02" priority="2"/>
       </failoverdomain>
   </failoverdomains>
   <resources>
       <lvm name="lvm" vg_name="shared_vg" lv_name="ha-lv"/>
       <fs name="FS" device="/dev/shared_vg/ha-lv" force_fsck="0" force_unmount="1" fsid="64050" fstype="ext3" mountpoint="/mnt" options="" self_fence="0"/>
   </resources>
   <service autostart="1" domain="FD" name="serv" recovery="relocate">
       <lvm ref="lvm"/>
       <fs ref="FS"/>
   </service>
</rm>

 

 

To set up HA LVM Failover (using the original method), perform the following steps:

 

1. Ensure that the parameter locking_type in the global section of /etc/lvm/lvm.conf is set to the value '1'.

 

2. Create the logical volume and filesystem using standard LVM2 and file system commands. For example:

# pvcreate /dev/sd[cde]1

# vgcreate <volume group name> /dev/sd[cde]1

# lvcreate -L 10G -n <logical volume name> <volume group name>

# mkfs.ext3 /dev/<volume group name>/<logical volume name> 

 

3. Edit /etc/cluster/cluster.conf to include the newly created logical volume as a resource in one of your services. Alternatively, configuration tools such as Conga or system-config-cluster may be used to create these entries.  Below is a sample resource manager section from /etc/cluster/cluster.conf:

 

<rm>  
   <failoverdomains>
       <failoverdomain name="FD" ordered="1" restricted="0">
          <failoverdomainnode name="neo-01" priority="1"/>
          <failoverdomainnode name="neo-02" priority="2"/>
       </failoverdomain>
   </failoverdomains>
   <resources>
       <lvm name="lvm" vg_name="shared_vg" lv_name="ha-lv"/>
       <fs name="FS" device="/dev/shared_vg/ha-lv" force_fsck="0" force_unmount="1" fsid="64050" fstype="ext3" mountpoint="/mnt" options="" self_fence="0"/>
   </resources>
   <service autostart="1" domain="FD" name="serv" recovery="relocate">
       <lvm ref="lvm"/>
       <fs ref="FS"/>
   </service>
</rm>

 

 

Note: If there are multiple logical volumes in the volume group, then the Logical Volume name (lv_name) in the lvm resource should be left blank or unspecified.  The ability to have multiple logical volumes in a single volume group in HA-LVM became available as of Red Hat Enterprise Linux 4.7 (rgmanager-1.9.80-1) and 5.2 (rgmanager-2.0.38-2).  Also note that in an HA-LVM configuration, a volume group may only be used by a single service.
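For example, reusing the sample's names, an lvm resource that covers every logical volume in the volume group simply drops the lv_name attribute:

```xml
<!-- One lvm resource managing the whole shared_vg volume group -->
<lvm name="lvm" vg_name="shared_vg"/>
```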

 

4. Edit the volume_list field in /etc/lvm/lvm.conf. Include the name of your root volume group and your hostname as listed in /etc/cluster/cluster.conf preceded by @. Note that this string MUST match the node name given in cluster.conf.  Below is a sample entry from /etc/lvm/lvm.conf:

volume_list = [ "VolGroup00", "@neo-01" ]

 

This tag will be used to activate shared VGs or LVs. DO NOT include the names of any volume groups that are to be shared using HA-LVM.
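The matching rule is, roughly: a volume group (or logical volume) may activate only if its name, or "@" followed by one of its tags, appears in volume_list; the rgmanager lvm agent exploits this by tagging the VG with the owning node's name. A bash sketch of that rule (illustrative only, not LVM's actual code):

```shell
# Roughly emulate LVM's volume_list filter: a VG may activate when its
# name, or "@" plus one of its tags, is listed.
allowed() {
    local vg=$1 tag=$2 entry
    shift 2
    for entry in "$@"; do
        [ "$entry" = "$vg" ] && return 0
        [ -n "$tag" ] && [ "$entry" = "@$tag" ] && return 0
    done
    return 1
}

# With volume_list = [ "VolGroup00", "@neo-01" ]:
allowed VolGroup00 ""     VolGroup00 @neo-01 && echo "root VG activates"
allowed shared_vg  neo-01 VolGroup00 @neo-01 && echo "tagged shared VG activates"
allowed shared_vg  ""     VolGroup00 @neo-01 || echo "untagged shared VG refused"
```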

 

5. Update the initrd on all your cluster nodes. To do this, use the following command:

# mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)

 

6. Reboot all nodes to ensure the correct initrd is in use.
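Steps 5 and 6 across all nodes might be scripted as below; a dry-run sketch using the sample's node names, printing the commands rather than running them (pipe the output to sh, or replace the echo with real ssh execution, to apply):

```shell
# Print the initrd-rebuild and reboot commands for each cluster node.
gen_node_cmds() {
    local node
    for node in "$@"; do
        # Escaped \$(uname -r) keeps the substitution for the remote shell.
        echo "ssh $node 'mkinitrd -f /boot/initrd-\$(uname -r).img \$(uname -r)'"
        echo "ssh $node reboot"
    done
}

gen_node_cmds neo-01 neo-02
```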


