Re: CLVMD without GFS


Well,
 
Pointless? I'm not sure: you take advantage of having all the other nodes in the cluster updated when the LVM metadata is modified by the node holding the VG.
 
Second point: HA-LVM (aka host tags) has, IMHO, a security problem, as anyone could modify the host tag on a VG without any restriction (there is no locking mechanism as in CLVM).
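To illustrate the concern (the VG name "datavg" and the node tags "node1"/"node2" are hypothetical): plain LVM tag commands let root on any node move a host tag, and nothing arbitrates the change:

```shell
# On any node that can see the shared storage, root can simply
# move the host tag -- no cluster lock is taken for this:
vgchange --deltag node1 datavg   # drop the tag that "reserved" the VG for node1
vgchange --addtag node2 datavg   # claim the VG for node2 instead
vgs -o vg_name,vg_tags datavg    # show the (now changed) tags
```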
 
I have nothing against clustered filesystems, but in my specific case I have to host several Sybase Dataservers on some clusters, and the only acceptable option for my DBAs is to use raw devices.
 
I never meant to combine HA-LVM and CLVM; I consider them mutually exclusive.
 
Regards
 
2009/7/21, Christine Caulfield <ccaulfie@xxxxxxxxxx>:
It seems a little pointless to integrate clvmd with a failover system. They're almost totally different ways of running a cluster. clvmd assumes a symmetrical cluster (as you've found out) and is designed so that the LVs are available on all nodes for a cluster filesystem. Trying to make that sort of system work for a failover installation is always going to be awkward; it's not what it was designed for.

That, in part I think, is why HA-LVM checks for clustered VGs and declines to manage them. A resource should be controlled by one manager, not two; anything else is just asking for confusion.

Basically you either use clvmd or HA-LVM; not both together.
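As a sketch of what the either/or choice looks like in practice (an excerpt in the style of an RHEL 5-era /etc/lvm/lvm.conf; the "@node1" tag and "rootvg" name are examples, not from the thread):

```shell
# Excerpt from /etc/lvm/lvm.conf -- pick ONE of the two models:

# CLVM: cluster-wide locking handled by clvmd
locking_type = 3

# HA-LVM: local locking, activation restricted by a host tag
# (use instead of locking_type = 3; "@node1" is this host's tag)
#locking_type = 1
#volume_list = [ "rootvg", "@node1" ]
```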

If you really want to write a resource manager to use clvmd then feel free; I don't have any references, but others might. It's not an area I have ever had to go into.

Good luck ;-)

Chrissie



On 07/21/2009 03:40 PM, brem belguebli wrote:
Hi,
That's what I'm trying to do.
If you mean lvm.sh, well, I've been playing with it, but it does some
"sanity" checks that are weird:

  1. It expects HA-LVM to be set up (why such a check if we want to use CLVM?).
  2. It exits if it finds a clustered VG (kind of funny!).
  3. It exits if lvm.conf is newer than /boot/*.img (about this
     one, we tend to prevent the cluster from starting automatically ...)

I was looking for some doc on how to write my own resources, i.e. a CLVM
resource that checks whether the VG is clustered, if so by which node it is
exclusively held, and, if that node is down, activates the VG exclusively
on the failover node.
If you have some good links to provide, that'd be great.
Thanks
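A minimal sketch of such a check (the VG name "datavg" is hypothetical; assumes the CLVM-era LVM tools): the clustered flag shows up as a "c" in the sixth character of the VG attributes, and "vgchange -aey" requests exclusive activation through clvmd, so it fails if another node already holds the VG:

```shell
#!/bin/sh
VG=datavg

# The 6th character of vg_attr is 'c' for a clustered VG (e.g. "wz--nc").
attr=$(vgs --noheadings -o vg_attr "$VG" | tr -d ' ')
case "$attr" in
  ?????c*) echo "$VG is clustered" ;;
  *)       echo "$VG is not clustered; refusing to manage it" ; exit 1 ;;
esac

# Try to activate the VG exclusively on this node; clvmd's DLM lock
# makes this fail if another node already holds it exclusively.
vgchange -aey "$VG" || exit 1
```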


2009/7/21, Christine Caulfield <ccaulfie@xxxxxxxxxx
<mailto:ccaulfie@xxxxxxxxxx>>:

   On 07/21/2009 01:11 PM, brem belguebli wrote:

       Hi,
       When creating the VG clustered by default, you implicitly
       assume that it will be used with a clustered FS on top of it
       (GFS, OCFS, etc.) that will handle the active/active mode.
       As I do not intend to use GFS in this particular case, but ext3
       and raw devices, I need to make sure the VG is activated
       exclusively on one node, preventing the other nodes from
       accessing it unless the failover procedure kicks in (the node
       holding the VG crashed), and then to reactivate it exclusively
       on the failover node.
       Thanks
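For reference, a sketch of the commands involved (the VG name "datavg" and the device path are assumptions, not from the thread): a VG created with the clustered flag, then activated exclusively on one node so ext3 and raw devices stay safe:

```shell
vgcreate -cy datavg /dev/mapper/mpath0   # create the VG with the clustered flag set
vgchange -aey datavg                     # activate exclusively on this node (via clvmd)
vgchange -an datavg                      # deactivate before the failover node takes over
```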



   In that case you probably ought to be using rgmanager to do the
   failover for you. It has a script for doing exactly this :-)

   Chrissie


   --
   Linux-cluster mailing list
   Linux-cluster@xxxxxxxxxx <mailto:Linux-cluster@xxxxxxxxxx>
   https://www.redhat.com/mailman/listinfo/linux-cluster



