RE: RHEL+RAC+GFS

Udi,

I never did receive an answer on this, and Metalink was no help either. My
belief is that Red Hat sells GFS and RHCS to Oracle customers so they can
collect the $2K-$3K (US) per-node income. I guess I would as well if I were
them : ); they need to eat too.

WARNING! A couple of days ago I found out that RHEL+RAC+GFS is NOT covered
under Oracle's "Unbreakable Support," and they will NOT assist with ANY GFS
issues!

Here is what I have done thus far as a proof of concept for our 11i
implementation conference room pilots (CRPs):

Part A
- Purchased a RHEL subscription
- Downloaded the GFS and RHCS SRPMs, compiled, and installed them
- Built a 4-node cluster with iLO fencing and 2 CLVM2/GFS volumes from an EMC
CX300
- Made my staging area for the 11i install
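For anyone trying to reproduce the iLO-fenced cluster above, the fencing
portion of /etc/cluster/cluster.conf would look roughly like this. The node
names, iLO hostname, and credentials below are placeholders, not my actual
values; RHCS ships the fence_ilo agent for HP iLO:

```xml
<!-- fragment of a hypothetical /etc/cluster/cluster.conf:
     one of the four nodes, fenced through its HP iLO interface -->
<clusternode name="node1" votes="1">
  <fence>
    <method name="1">
      <device name="ilo1"/>
    </method>
  </fence>
</clusternode>

<fencedevices>
  <fencedevice agent="fence_ilo" name="ilo1"
               hostname="node1-ilo" login="admin" passwd="changeme"/>
</fencedevices>
```

Repeat the clusternode/fencedevice pair for each of the four nodes.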

Part A Results
- At first look things seemed fine
- Did basic testing with tools like dd, touch, ls, etc.
- Installed Stage11i; the install seemed slow
- Under heavy I/O (simultaneous 1 GB file creation using dd) I received
kernel panics; adding numa=off to the boot string fixed this
- Installed CRP1 on a single node
- CRP1 is operational, but seems sluggish
- Destroyed the cluster and moved CRP1 to a single-node cluster; same result,
operational but sluggish
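For reference, the numa=off workaround is just an extra parameter on the
kernel line in grub.conf; the kernel version and root device below are only
examples, not my actual entries:

```
# /boot/grub/grub.conf -- example kernel stanza with NUMA disabled
title Red Hat Enterprise Linux AS (2.6.9-22.ELsmp)
        root (hd0,0)
        kernel /vmlinuz-2.6.9-22.ELsmp ro root=/dev/VolGroup00/LogVol00 numa=off
        initrd /initrd-2.6.9-22.ELsmp.img
```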

Part B
- Made 3 CLVM2/GFS volumes: DB, APPS, and BACKUP
- Mounted all three volumes on both nodes
- Installed CRP2 with node1 as DB and node2 as APPS

Part B Results
- The install was slow
- CRP2 was sluggish, and after a few hours dlm_sendd became a giant CPU hog;
if the DB and apps were bounced (no reboot), things would be OK for a while
- Switched over to the older lock mechanism, GULM, but had exactly the same
results
- At this point great disappointment set in : ( and I reached out to this
mailing list for help; no response(!)

Part C
- I reformatted the DB and APPS volumes as ext3 but left them managed by
CLVM2 (I never thought to do otherwise)
- I removed the BACKUP volume as a cluster resource, but since I still had
CLVM2 in play I found that I had to keep cman, ccsd, and clvmd enabled for
everything to work
- Now only APPS was mounted on the apps node (CLVM+ext3), DB was mounted
only on the db node (CLVM+ext3), and BACKUP was not mounted
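In other words, even with ext3 on the data volumes, the cluster stack still
has to come up so CLVM can activate the logical volumes. On RHEL 4 that
means enabling roughly the following services (a sketch; only meaningful on
an RHCS node, and fenced is my assumption since cman generally won't join
without it):

```
chkconfig ccsd on    # cluster configuration system daemon
chkconfig cman on    # cluster manager / membership
chkconfig fenced on  # fencing daemon
chkconfig clvmd on   # clustered LVM daemon, activates the CLVM volumes
```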

Part C Results
- Same: CRP2 was sluggish, and after a few hours dlm_sendd became a giant
CPU hog; if the DB and apps were bounced (no reboot), things would be OK for
a while
- If the BACKUP volume (CLVM+GFS) was mounted, it got even worse

Part D
- Destroyed the CLVM setup on the BACKUP volume
- Formatted the entire device (BACKUP volume) as ext3 without any
partitioning
- NOTE: the DB and APPS volumes are RAID 1+0 arrays on fibre channel (fast)
and the BACKUP volume is a RAID 5 ATA array (slow)
- So now the setup is as follows:
	o db node mounts the DB volume - fibre channel+CLVM+ext3
	o apps node mounts the APPS volume - fibre channel+CLVM+ext3
	o apps node shares the APPS volume to the db node via NFS, read-only
	o db node mounts the BACKUP volume - ATA+ext3
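The NFS leg of that layout amounts to an export on the apps node and a
read-only mount on the db node; the paths and hostnames here are
illustrative, not my actual ones:

```
# apps node, /etc/exports -- share the APPS volume read-only
/u01/apps    dbnode(ro,sync)

# db node, /etc/fstab -- mount it read-only over NFS
appsnode:/u01/apps  /u01/apps  nfs  ro,hard,intr  0 0
```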

Part D Results
- Same: CRP2 was sluggish, and after a few hours dlm_sendd became a giant
CPU hog; if the DB and apps were bounced (no reboot), things would be OK for
a while
- Throughput on the BACKUP volume is fantastic in comparison!!!

Conclusion
RHEL+RAC+GFS may be possible. However, I have not been able to put together
the recipe, I have had no real assistance from outside resources, and I
think there is a possible bug in dlm_sendd. Until a true recipe is developed
I cannot personally recommend this configuration, regardless of what
http://www.redhat.com/whitepapers/rha/gfs/Oracle9i_RAC_GFS.pdf says. I do
not intend to slight any company or product; it is entirely possible my
results are due to my own ignorance.

Final Note (off-topic and off-list)
Some may be wondering what I plan to do next. I am currently pursuing OCFS2
as a file system and clustering solution. Here is why:
- It is GPL'd and free (as in beer)
- It has freely available binaries for stock RedHat kernels
- It has much in common with ext3
- It is included in the newer versions of Linus's kernel tree
- It will qualify for "Unbreakable Support"
- It appears to have applications entirely outside the Oracle world, such as
creating a shared root (/) volume while still maintaining node-specific
configuration files. Cool stuff.
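For the curious, OCFS2's cluster configuration is a single flat file,
/etc/ocfs2/cluster.conf, identical on all nodes; the names and addresses
below are illustrative for a two-node layout like mine:

```
cluster:
        node_count = 2
        name = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.0.101
        number = 0
        name = dbnode
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.0.102
        number = 1
        name = appsnode
        cluster = ocfs2
```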

Happy clustering, I hope some of my months of frustrations are useful to
someone.

-- 
Sean N. Gray
Director of Information Technology
United Radio Incorporated, DBA BlueStar
24 Spiral Drive
Florence, Kentucky 41042
office: 859.371.4423 x3263
toll free: 800.371.4423 x3263
fax: 859.371.4425
mobile: 513.616.3379
________________________________________
From: Yaffe, Udi 
Sent: Sunday, April 16, 2006 5:49 AM
To: Sean Gray
Subject: RHEL+RAC+GFS

Sean,
 
I read your message in the RedHat forum, 14 Mar 2006 (about Oracle RAC on
RedHat, using GFS), and I am curious to know whether you got an answer.
I have spent the last three weeks looking for a document or any other
article on the web explaining how to install RAC on GFS, but couldn't find
any. If you do have an answer, can you please give me some advice on how to
start?
 
Regards,
 
      Udi 
      Senior System Engineer - Project delivery
 


--

Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
