OCFS2 isn't stable yet; I wouldn't suggest using it for production systems. Also, where did the requirement for 2.6 (the kernel, I assume) come from? It sounds like he was using RHEL3 anyway...

I'm not familiar with the 9i RAC setup, but I installed a 10g RAC that is in production now. The only things I use OCFS for are the Cluster Registry, the voting disk, and the Oracle parameter file (spfile). All database nodes have access to the same set of raw LUNs on the SAN, managed via ASM (Automatic Storage Management, a 10g feature), which holds the actual data. You could put all of your Oracle files/data on OCFS, but that is not the highest-performance solution.

To have multiple nodes access non-Oracle data on a clustered filesystem, go with GFS for sure, i.e. a normal POSIX-compliant filesystem. OCFSv1 isn't POSIX-compliant and only properly stores Oracle-specific files, and though OCFSv2 is meant to be a generic clustered filesystem, it isn't production-ready, as I mentioned above:

http://oss.oracle.com/projects/ocfs2/

How you set up the RAID is a completely separate issue from how you cluster your data among nodes.

-ryan

-----Original Message-----
From: linux-raid-owner@xxxxxxxxxxxxxxx
[mailto:linux-raid-owner@xxxxxxxxxxxxxxx] On Behalf Of Lars Marowsky-Bree
Sent: Thursday, April 28, 2005 8:07 AM
To: miele@xxxxxxxxx
Cc: jakob; linux-raid; mingo; bueso
Subject: Re: Can you help me on Linux SW RAID?

On 2005-04-28T14:57:29, "miele@xxxxxxxxx" <miele@xxxxxxxxx> wrote:

> Have you experienced problem using OCFS insetad of OCFS2??

OCFS isn't available on 2.6. On 2.6, you have to use OCFS2.

Sincerely,
    Lars Marowsky-Brée <lmb@xxxxxxx>

--
High Availability & Clustering
SUSE Labs, Research and Development
SUSE LINUX Products GmbH - A Novell Business

-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html