Re: General 2-node cluster questions

Hi Riaan and Lon,

Thanks for your replies. The app we use is called PRO5 and is launched
over SSH: users SSH into the RHEL4 box, and their .bash_profile starts
the PRO5 binary located at /basis/pro5/pro5. Their .bash_profile files
look like this:

============================
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH
unset USERNAME
umask 0000
TERM=vt220;export TERM
TERMCAP=/basis/pro5/termcap;export TERMCAP
cd /basis/pro5
./pro5 -tT001 /live/cf.src/PGMSYS9999
exit
============================

Since I will be using an active/passive config in this scenario, would I
be able to install both PRO5 and its data on an ext3 partition located
on the SAN? Would I even need a GFS partition at all? Obviously SSH
would run locally on each node.

Thanks again,

Brad Filipek

-----Original Message-----
From: linux-cluster-bounces@xxxxxxxxxx
[mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Riaan van Niekerk
Sent: Wednesday, January 10, 2007 2:53 AM
To: linux clustering
Subject: Re: General 2-node cluster questions



Brad Filipek wrote:
> I am in the process of setting up a 2-node cluster with a SAN for data
> storage. I have a few general questions as this is my first time using
> RHEL CS.
>
> I have two boxes with RHEL4U4 and one application. Should the app be
> installed locally on both nodes, and have the data on the SAN? Or
> should the app and the data both be on the SAN? This will be an
> active/passive config.
>

It is up to you to decide whether you want the app on the SAN or not.

App installed locally

If the app is simple and/or part of the OS filesystem hierarchy (e.g.
Apache, or anything not in /opt), you can install and configure it on
node1 and copy the configuration across to node2 (keeping in mind that
you then need to keep the configs in sync manually).
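One way to make that manual sync less error-prone is a small wrapper script run from node1. A minimal sketch, assuming rsync is available on both nodes; the peer hostname and file list below are made-up examples, not from this thread, and the script only prints the commands it would run so you can review them first:

```shell
#!/bin/sh
# Hypothetical config-sync helper: push selected config files from this
# node to its peer. Peer name and file list are illustrative placeholders.
PEER=node2
FILES="/etc/httpd/conf/httpd.conf /etc/sysconfig/httpd"

for f in $FILES; do
    # Print the rsync command instead of running it; remove the leading
    # 'echo' once the list is correct. -a preserves permissions/ownership.
    echo rsync -av "$f" "root@$PEER:$f"
done
```

Running this from cron (without the `echo`) keeps drift between the nodes short-lived, at the cost of silently propagating a bad edit to both nodes.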

App installed to shared storage

(for illustration purposes I will use Oracle as the clustered app)

If the app is complex and installs into a distinct directory you can
partition off (e.g. ORACLE_HOME somewhere under /opt/oracle), it might
make more sense to have the whole /opt/oracle on the SAN as well.

Any files that belong to or are required by the application but live
outside that directory (e.g. /etc/oratab, init scripts) would still have
to be copied across manually from one node to the other. If you don't
have an easy way of determining which files are located outside the
shared partition, you might have to install the app twice (once on each
node), but the second install can get confused by the install already
done on node1.

> Also, does the app and data both need to sit on a GFS?
>

For active/passive: they don't need to be on GFS, but they can be. If
you are using an active/passive cluster and only one node at a time will
have the application running (and writing to the partition with the data
on it), you can use ext3.

Ext3 is also a lot faster than GFS, since it does not carry the
overhead and complexity of a clustered file system.
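In cluster.conf terms, an active/passive service with a plain ext3 filesystem on the SAN might look roughly like this. This is a sketch only: the node names, device, VIP, and mountpoint are placeholders, and the exact resource attributes should be checked against the rgmanager schema shipped with your Cluster Suite release:

```xml
<rm>
  <failoverdomains>
    <failoverdomain name="pro5-domain" ordered="1" restricted="1">
      <failoverdomainnode name="node1" priority="1"/>
      <failoverdomainnode name="node2" priority="2"/>
    </failoverdomain>
  </failoverdomains>
  <service name="pro5" domain="pro5-domain" autostart="1">
    <!-- ext3 on the SAN LUN; rgmanager mounts it on one node at a time -->
    <fs name="pro5-fs" device="/dev/sdb1" mountpoint="/basis"
        fstype="ext3" force_unmount="1"/>
    <!-- virtual IP the users SSH to; follows the service on failover -->
    <ip address="192.168.0.50" monitor_link="1"/>
  </service>
</rm>
```

Because only the active node ever mounts the LUN, no cluster-aware filesystem is needed; `force_unmount` helps ensure the passive node can take over cleanly.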

Also, a tip: before you configure the app as a clustered service in
rgmanager, make sure that it starts up flawlessly on both nodes (after
manually moving the VIP and filesystem resources from one node to the
other). Otherwise, if things don't work after you configure the app in
rgmanager, you may have to troubleshoot both the app startup and
rgmanager at the same time.
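That manual rehearsal can be written down as a checklist script so both admins run the same steps. A sketch under stated assumptions: the VIP, SAN device, and NIC below are made-up placeholders (only the /basis path comes from the thread), and with the default DRY_RUN=1 the script only prints each command instead of executing it:

```shell
#!/bin/sh
# Hypothetical rehearsal of what rgmanager will later do automatically.
# VIP, device, and NIC are placeholders; DRY_RUN=1 (default) only prints.
VIP="192.168.0.50/24"   # placeholder virtual IP
DEV="/dev/sdb1"         # placeholder SAN LUN
MNT="/basis"            # PRO5 install directory from the thread
NIC="eth0"

run() {
    if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "+ $*"     # dry run: show the command, do not execute it
    else
        "$@"
    fi
}

# On the node taking over the service:
run ip addr add "$VIP" dev "$NIC"
run mount -t ext3 "$DEV" "$MNT"
run ls "$MNT/pro5/pro5"     # sanity-check the app binary is visible

# To hand back, reverse the steps on this node first:
run umount "$MNT"
run ip addr del "$VIP" dev "$NIC"
```

If the app starts cleanly on both nodes after this by-hand swap, any remaining failure under rgmanager is a cluster-configuration problem, not an application one.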

greetings
Riaan



--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
