Re: GFS2 vs EXT3+HA-LVM

Rafael,

Thanks for the reply.  So far the setup is behaving as promised, although I've not tried to create any new volumes since the last time I rebooted, so I'll check that the day after tomorrow when I'm back at work. It could very well be that I have it all wrong with respect to locking_type=1; actually, reading the docs a bit more closely suggests you are right. I'll still check, though.
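In the meantime, here's roughly the test I have in mind to see whether clvmd still propagates metadata changes with locking_type=1 (the VG/LV names below are just placeholders):

    node1# lvcreate -L 100M -n testlv sharedvg
    node2# lvs sharedvg    # does testlv show up without a manual vgscan?

If the new LV doesn't appear on node2, that would suggest you're right about locking_type=1.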

Thanks


-Corey





On Tue, Sep 29, 2009 at 8:21 PM, Rafael Micó Miranda <rmicmirregs@xxxxxxxxx> wrote:
Hi Corey,

On Sun, 27-09-2009 at 01:48 +0100, Corey Kovacs wrote:
> clvmd is still used; basically it just makes sure the LVM changes are
> propagated to all nodes. The change is in /etc/lvm/lvm.conf, where
> locking_type=1 instead of 3 as it is for GFS1/2. If I go this route,
> there will be no use of GFS at all on this cluster. locking_type=1,
> along with the volume_list config option, is used to ensure that no
> two nodes have the same VG mounted.
>
> Of course this method is new to me, so my understanding of how lvm2
> with locking_type set to 1 works in conjunction with a running clvmd
> could be incorrect.
>
> As always, comments are appreciated.
>
> Corey
>

First, sorry for being late. I marked this as a "to read" topic, but I
did not get to it until now.

From your interest in the EXT3+HA-LVM configuration, I understand you
need a high-availability solution for your service, but you don't need
concurrent access to the filesystem.

I ran into the same GFS2 performance problems as you did, with results
falling far short of EXT3's. I have also tested XFS in these situations,
with even better performance (and XFS is now included in RHEL 5.4 as a
Technology Preview, so we can expect it to be ready for mission-critical
usage in 5.5 or so).

I studied the HA-LVM solution but found it "ugly" in terms of
administration. So I chose CLVM instead and tried to find a way to
guarantee that a volume is accessed by only one node in the cluster,
preventing administrator mistakes and the mounting of non-clustered
filesystems on more than one node at the same time.

There was an "undesired behaviour" in the LVM "exclusive" flag, which
Brem submitted to Bugzilla (thanks again). Once that is fixed, I hope
the RGMANAGER resource script I submitted can be included in the project
to implement this LVM "exclusive" usage.

If you don't need access to the storage to be handled by
high-availability cluster software, I encourage you to test this LVM
"exclusive" option by hand, without integrating it into RGMANAGER. For
testing purposes it should be fine. I would also recommend trying an XFS
filesystem on top of it. I can give you some instructions if you need
them.

If you do need access to the storage handled by a high-availability
solution, you should try the LVM resources included in RGMANAGER, again
with XFS on top of it.
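Something like this in /etc/cluster/cluster.conf should do it (a sketch
from memory; service, resource and device names are placeholders, so
check the lvm.sh and fs.sh agent parameters on your release):

    <service autostart="1" name="ha_xfs">
      <lvm name="halvm" vg_name="myvg" lv_name="mylv"/>
      <fs name="xfsdata" device="/dev/myvg/mylv" mountpoint="/data"
          fstype="xfs" force_unmount="1"/>
    </service>

RGMANAGER then makes sure the LV is only active on the node that is
currently running the service.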

About the "locking_type = 1" with CLVM issue: I did not even think that
it would be possible to use it. I would expect CLVM not to propagate
changes when locking_type is set to 1. Have you done any tests on this?
Is the configuration working as you expected?
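For comparison, the HA-LVM configuration I have seen documented uses
hostname tags in lvm.conf rather than clvmd, roughly like this
(hostnames and VG names are placeholders):

    # /etc/lvm/lvm.conf on each node
    locking_type = 1
    # only the root VG, plus VGs tagged with this node's name, may activate:
    volume_list = [ "VolGroup00", "@node1.example.com" ]

As far as I know you also have to rebuild the initrd after changing
volume_list, so that boot-time activation obeys it.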

Cheers,

Rafael

--
Rafael Micó Miranda

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
