Re: CLVM clarification

On Wednesday 01 October 2008 17:39, Angelo Compagnucci wrote:
> Hi to all, this is my first post on this list. Thanks in advance for every
> answer.
>
> I've already read every guide in this matter, this is the list:
>
> Cluster_Administration.pdf
> Cluster_Logical_Volume_Manager.pdf
> Global_Network_Block_Device.pdf
> Cluster_Suite_Overview.pdf
> Global_File_System.pdf
> CLVM.pdf
> RedHatClusterAdminOverview.pdf
>
> The truth is that one point about CLVM is still not clear to me.
>
> Let me give an example:
>
> In this example CLVM and the Cluster Suite are fully running without
> problems. Assume the nodes have the same cluster.conf and lvm.conf, and
> all the nodes of the cluster are joined and operational.

Does your example include shared storage (GNBD, iSCSI, SAN, ...)?

>
> NODE1:
>
> pvcreate /dev/hda3
>
> NODE2:
>
> pvcreate /dev/hda2
>
> Assuming that CLVM propagates LVM metadata across the cluster, if I run the
> command:
>
> pvscan
>
> I should see /dev/hda2 and /dev/hda3
>
> and then I can create a VG with
>
> vgcreate <vgname> /dev/hda2 /dev/hda3 ...
>
> The question is: how does LVM metadata sharing work? Do I have to use GNBD
> on the raw partition to share a device between nodes? Can I create a GFS
> over a spanned volume group? Are only logical volumes shareable?

I have the feeling that something is not clear here. I am not an expert, but:

GNBD is just a means of exporting a block device over the IP network. A GNBD
device is accessible to multiple nodes at the same time, so you can include
that block device in a CLVM volume group. Instead of GNBD, you can also use
any other shared storage (iSCSI, FC, ...). But be careful: from what I have
understood, some network block devices are not shareable between many hosts
(NBD and AoE, for example)!
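For example, exporting a device with GNBD and importing it on another node looks roughly like this (a sketch from memory, untested; the export name "disk1" and the hostname "gnbdserver" are made up, so check the GNBD documentation for your version):

```shell
# On the node that owns the disk (the GNBD "server"):
gnbd_export -d /dev/sda2 -e disk1    # export /dev/sda2 under the name "disk1"

# On every other node (the GNBD "clients"):
gnbd_import -i gnbdserver            # import all exports from host "gnbdserver"

# The device then appears as /dev/gnbd/disk1 on the clients and can be
# handed to pvcreate / vgcreate like any other block device.
```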

After that, you have a choice:

 - make one LV with a shared filesystem (GFS). You can then mount the same
filesystem on many nodes at the same time.

 - make many LVs with an ext3 / xfs / ... filesystem. But you then have to
make sure that each LV is mounted on only one node at any given time.

The type of filesystem is independent of CLVM; it is a higher-level component.
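Roughly, the two options above look like this (a sketch only, untested; the device path, VG/LV names, size, and the cluster name "mycluster" are all made up, and "mycluster" must match the cluster name in your cluster.conf):

```shell
# On one node: create a cluster-aware VG (-c y) and an LV on the shared device
pvcreate /dev/gnbd/disk1
vgcreate -c y sharedvg /dev/gnbd/disk1
lvcreate -n lv0 -L 10G sharedvg

# Option 1: GFS -- can be mounted on several nodes at the same time
# (-j 2 = two journals, one per node that will mount the filesystem)
gfs_mkfs -p lock_dlm -t mycluster:lv0 -j 2 /dev/sharedvg/lv0
mount -t gfs /dev/sharedvg/lv0 /mnt/shared    # run on each node

# Option 2: ext3 -- but then only ONE node may mount the LV at a time
mkfs.ext3 /dev/sharedvg/lv0
mount /dev/sharedvg/lv0 /mnt/data             # on one node only
```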

In this picture, CLVM is only a low-level component: it prevents concurrent
access by many nodes to the LVM metadata written on the shared storage.
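Concretely, that is what the clustered locking setting in lvm.conf is for (as far as I remember, on the RHEL 4/5 era cluster stack with clvmd):

```shell
# In /etc/lvm/lvm.conf, on every cluster node, the global section should have:
#     locking_type = 3     # use the clustered locking library (clvmd + DLM)
# and the clvmd daemon must be running on each node, e.g.:
service clvmd start
chkconfig clvmd on
```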

The data are not "spanned" across the local storage of many nodes (well, I
suppose you *could* do that, but you would need other tools / layers).

One other point: if I remember correctly, the Red Hat documentation says it is
not recommended to use GFS on a node that exports a GNBD device. So if you use
GNBD as shared storage, I suppose it is better to dedicate one or more nodes
as GNBD "servers".


HTH

>
> Thanks for your answers!!

-- 
Xavier Montagutelli                      Tel : +33 (0)5 55 45 77 20
Service Commun Informatique              Fax : +33 (0)5 55 45 75 95
Universite de Limoges
123, avenue Albert Thomas
87060 Limoges cedex

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
