Re: Some questions - migrating from Sun to Red Hat cluster


 



On Mon, 1 Nov 2010 22:45:11 -0600, Ivan Fetch <ifetch@xxxxxx> wrote:
Hello,

I have been using two CentOS 5.5 virtual machines, to learn Linux
clustering, as a potential replacement for Sun (Sparc) clusters. We
run Red Hat Enterprise Linux, but do not yet have any production
cluster experience. I've got a few questions, which I'm stuck on:

Is it possible to stop or restart one resource, instead of the entire
resource group (service)? This can be handy when you want to work on a
resource (Apache, say) without the cluster restarting it out from under
you, while your storage and IP stay online. It seems like the clusvcadm
command only operates on services, i.e. groups of resources.


I don't know if this is the officially sanctioned way, but I tend to freeze the group/service (clusvcadm -Z) and then use the daemon's own init script (service httpd reload, etc.) to manipulate it directly. I've got a multi-daemon mail server service that brings up postfix, amavisd, sqlgrey and more, so this is handy there.
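
For reference, the rough sequence I use looks something like this ("webservice" is just a placeholder service name):

    # Freeze the service so rgmanager stops monitoring/restarting it:
    clusvcadm -Z webservice

    # Work on the individual daemon directly with its init script:
    service httpd stop
    # ... do the maintenance ...
    service httpd start

    # Unfreeze so the cluster resumes managing the service:
    clusvcadm -U webservice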

What is the most common way to create and adjust service definitions
- using Luci, editing cluster.conf by hand, using command-line tools,
or something else?


I'm a die-hard CLI guy, so I tend to prefer editing cluster.conf by hand and validating it before loading it (I had a couple of typos that caused me grief as far as keeping things running goes).
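
Roughly, my edit/validate/push cycle on CentOS 5 looks like the sketch below (remember to bump config_version in the <cluster> tag first; on newer releases the propagation step is cman_tool version -r instead, so check the docs for your release):

    # Offline sanity check of the resource tree with rgmanager's test tool:
    rg_test test /etc/cluster/cluster.conf

    # Push the updated config out to the other nodes:
    ccs_tool update /etc/cluster/cluster.conf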

For a non-global filesystem which follows a service, is HA LVM the
way to go? I have seen some recommendations against HA LVM, because
if the LVM tag is reset on a node, that node can touch the LVM
out of turn.

What is the recommended way to make changes to an HA LVM, or add a
new HA LVM, when lvm.conf on the cluster nodes is already configured
for tagging? I have accomplished this by temporarily editing lvm.conf
on one node, removing the tag line, and then making the necessary
changes to the LVM - it seems like there is likely a better way to do this.
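
For context, the tagging being referred to is the volume_list activation filter in lvm.conf, along these lines (the VG and node names are just placeholders):

    # Only the local root VG and LVs tagged with this node's name may be
    # activated locally; the HA VG is activated by rgmanager via its tag.
    volume_list = [ "VolGroup00", "@node1" ]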

Will the use of a quorum disk help to keep one node from fencing the
other at boot (e.g. node1 is running, node2 boots and fences node1)?
This fencing does not happen every time I boot node2 - I may need to
reproduce this and provide logs.

I think you may need/want <fence_daemon clean_start="1"> included to avoid this. IIRC, setting clean_start helped me avoid fencing of the surviving node at restart.
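
Something along these lines in cluster.conf, as a rough sketch (the delay values here are only illustrative, not a recommendation):

    <fence_daemon clean_start="1" post_join_delay="20" post_fail_delay="0"/>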

I use a quorum disk as well, to reduce confusion between the nodes during reboot scenarios.
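
For what it's worth, my quorum-disk stanza looks roughly like this (label, votes and timing are placeholders - size them for your own setup; with two 1-vote nodes plus a 1-vote qdisk you would run with expected_votes="3" and two_node="0"):

    <quorumd interval="1" tko="10" votes="1" label="qdisk"/>
    <cman expected_votes="3" two_node="0"/>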

hth,

// Thomas

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster

