Re: CLVM clarification

Ok, this may well be clear, but in Cluster_Logical_Volume_Manager.pdf I read (bottom of page 3):

"The clmvd daemon is the key clustering extension to LVM. The clvmd daemon runs in each cluster computer and distributes LVM metadata updates in a cluster, presenting each cluster computer with the same view of the logical volumes"

This is a picture of what I have in mind:

-------------------------------------
|          GFS filesystem           |
-------------------------------------
|                LV                 |
-------------------------------------
|                VG                 |
-------------------------------------
|    PV1    |    PV2    |    PV3    |
-------------------------------------
|   GNBD1   |   GNBD2   |   GNBD3   |
-------------------------------------
|   hda1    |   hda1    |   hda1    |
|   Node1   |   Node2   |   Node3   |
-------------------------------------

In this case the CLVM features are not useful, because there is only one machine (which might not even be a node of the cluster) that has LVM on top of the GNBD-exported devices. So the nodes know nothing about each other.
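
To make the picture concrete, this is roughly how I imagine that stack would be built on the machine that sees the three GNBD devices (only a sketch: the device names, sizes, cluster name and journal count are invented, and I'm assuming the GNBD imports are already done):

pvcreate /dev/gnbd/node1disk /dev/gnbd/node2disk /dev/gnbd/node3disk
vgcreate vg_gnbd /dev/gnbd/node1disk /dev/gnbd/node2disk /dev/gnbd/node3disk
lvcreate -L 100G -n lv_gfs vg_gnbd
# -p lock_dlm assumes the filesystem will be mounted by cluster nodes;
# for a single machine lock_nolock would do. 3 journals = 3 mounters.
gfs_mkfs -p lock_dlm -t mycluster:gfs1 -j 3 /dev/vg_gnbd/lv_gfs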

Now consider this situation:

---------------------------------
|              GFS              |
---------------------------------
|              LV               |
---------------------------------
|      VG1      |      VG2      |
---------------------------------
|      PV1      |      PV2      |
|     Node1     |     Node2     |
---------------------------------
|       CLVM coordinates        |
---------------------------------

In this situation it makes sense to have a clustered LVM, because if I have to do some maintenance on the VGs, CLVM can lock and unlock the device concerned.
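
For example, during maintenance I imagine something like this (only a sketch with invented names, assuming the VG carries the clustered flag so that clvmd coordinates activation):

vgchange -c y vg1          # mark the VG clustered so clvmd coordinates its locks
lvchange -aey vg1/lv1      # activate the LV exclusively on this node for maintenance
# ... do the maintenance work ...
lvchange -an vg1/lv1       # deactivate it again
vgchange -ay vg1           # reactivate the VG across the cluster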

Is this the correct behaviour??

If not, what is the role of CLVM in a cluster?
                         

2008/10/2 Xavier Montagutelli <xavier.montagutelli@xxxxxxxxx>
On Wednesday 01 October 2008 17:39, Angelo Compagnucci wrote:
> Hi to all, this is my first post on this list. Thanks in advance for every
> answer.
>
> I've already read every guide in this matter, this is the list:
>
> Cluster_Administration.pdf
> Cluster_Logical_Volume_Manager.pdf
> Global_Network_Block_Device.pdf
> Cluster_Suite_Overview.pdf
> Global_File_System.pdf
> CLVM.pdf
> RedHatClusterAdminOverview.pdf
>
> The truth is that one point about CLVM is still not clear to me.
>
> Let me make an example:
>
> In this example CLVM and the Cluster Suite are fully running without
> problems. Let's assume the same cluster.conf and lvm.conf on every node,
> and that the nodes of the cluster are joined and operative.

Does your example include a shared storage (GNBD, iSCSI, SAN, ...) ?

>
> NODE1:
>
> pvcreate /dev/hda3
>
> NODE2:
>
> pvcreate /dev/hda2
>
> Let's assume that CLVM spans LVM metadata across the cluster; if I run the
> command:
>
> pvscan
>
> I should see /dev/hda2 and /dev/hda3
>
> and then I can create a vg with
>
> vgcreate myvg /dev/hda2 /dev/hda3 ...
>
> The question is: how does LVM metadata sharing work? Do I have to use GNBD on
> the raw partition to share a device between nodes? Can I create a GFS over a
> spanned volume group? Are only logical volumes shareable?

I have the feeling that something is not clear here. I am not an expert, but :

GNBD is just a means of exporting a block device over the IP network. A GNBD
device is accessible to multiple nodes at the same time, and thus you can
include that block device in a CLVM Volume Group. Instead of GNBD, you can also
use any other shared storage (iSCSI, FC, ...). But be careful: from what I have
understood, some network storage is not shareable between many hosts (NBD and
AoE, for example)!
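
For what it's worth, the export / import side looks roughly like this (from
memory, and the device, export name and host name are invented, so treat it
as a sketch only):

gnbd_serv                           # start the GNBD server daemon (server side)
gnbd_export -d /dev/hda1 -e data1   # export the local partition under the name "data1"

gnbd_import -i storage-node         # on each client: import, giving /dev/gnbd/data1
pvcreate /dev/gnbd/data1            # the shared device can then join a clustered VG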

After that, you have the choice :

 - to make one LV with a shared filesystem (GFS). You can then mount the same
filesystem on many nodes at the same time.

 - to make many LVs with an ext3 / xfs / ... filesystem. But then you have to
make sure that each LV is mounted on only one node at a given time (see the
sketch below).
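
For example (only a sketch, with invented VG / LV names and cluster name):

# choice 1: one LV with GFS, mountable on every node at once
gfs_mkfs -p lock_dlm -t mycluster:shared1 -j 3 /dev/myvg/lv_shared
mount -t gfs /dev/myvg/lv_shared /mnt/shared     # can be run on every node

# choice 2: one LV per service with ext3, mounted on a single node only
mkfs.ext3 /dev/myvg/lv_web
mount /dev/myvg/lv_web /var/www                  # on exactly one node at a time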

But the type of filesystem is independent; it is a higher-level component.

In this picture, CLVM is only a low-level component that prevents concurrent
access by many nodes to the LVM metadata written on the shared storage.
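
If I remember correctly, this coordination is enabled roughly like this (again
from memory, so check the docs; "myvg" and the device name are invented):

# /etc/lvm/lvm.conf on every cluster node
locking_type = 3                      # clustered locking through clvmd

# then, with clvmd running, clustered VGs are created with the -c flag:
vgcreate -c y myvg /dev/gnbd/data1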

The data are not "spanned" across the local storage of many nodes (well, I
suppose you *could* do that, but you would need other tools / layers ?)

Other point : if I remember correctly, the Red Hat doc says it's not
recommended to use GFS on a node that exports a GNBD device. So if you use
GNBD as shared storage, I suppose it's better to specialize one or more
nodes as GNBD "servers".


HTH

>
> Thanks for your answers!!

--
Xavier Montagutelli                      Tel : +33 (0)5 55 45 77 20
Service Commun Informatique              Fax : +33 (0)5 55 45 75 95
Universite de Limoges
123, avenue Albert Thomas
87060 Limoges cedex

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
