Re: GFS + Oracle storage hardware suggestions?

That setup is possible.  It adds extra servers and more things to go wrong,
though, and GNBD would likely cause a performance hit.  Direct-attached
storage (SCSI) units have been ruled out as an option since we use a lot
of them now and have had a lot of problems.  Every so often a unit will
just flip out and take all the drives offline; you bring them all back
online and they go back to working fine.  These are all Dell PowerVaults.
Dell support says "oh yeah, that happens sometimes... one drive will
hiccup and cause a cascading failure for the other drives on that bus."
We had another non-Dell DAS unit do something similar.  So now when we
use them it's only in an md raid1 across two hardware raid5 units.  That's
why we are looking at SANs, which hopefully don't have this problem.
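For anyone curious, a minimal sketch of that md raid1 over two hardware
RAID5 units might look like the following (the device names /dev/sdb and
/dev/sdc are hypothetical; each stands in for a LUN exported by one of
the hardware RAID5 controllers):

```shell
# Mirror two hardware-RAID5 LUNs with Linux md raid1, so a single
# controller "flipping out" only takes down one half of the mirror.
# /dev/sdb and /dev/sdc are hypothetical -- substitute the LUNs
# exported by your two hardware RAID5 units.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Watch the initial resync, then put a filesystem on the mirror.
cat /proc/mdstat
mkfs.ext3 /dev/md0
```

With this layout a whole-unit failure degrades the mirror instead of
taking the data offline, at the cost of writing everything twice.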

y f said:
> Can GFS be used in a multiple-GNBD-server mode, so we can build it
> all on an IP network to keep costs down?
>
> I've sketched the idea in the attached picture.
>
> On 8/25/05, Matt Goebel <mgoebel@xxxxxxxxxxxxxxxxxxxxx> wrote:
>> I am in the process of specing out a high availability Oracle database
>> solution and need some advice from those of you experienced in doing
>> this
>> as to what storage hardware to get.
>>
>> The plan right now is to have 3 Oracle 9i or 10g nodes (2 failover)
>> running on Red Hat Enterprise Linux 3.5 or 4.1 with a shared filesystem
>> for Oracle via GFS (6.0 or 6.1, depending on which version of RHEL).
>> We'd also like to use multipathing and 2 mirrored SANs of some sort.
>> The application we will be running requires very little in terms of
>> storage space, only ~3GB per year, per database.  DB load will also
>> probably not be that high.  Uptime is critical.  So for hardware I have
>> been looking into the following:
>>
>> iSCSI SAN: I've tested a low-end (nice price) EMC AX100i with GFS 6.0
>> (using GULM) and RHEL 3.5 (4.1 doesn't have iSCSI working yet).
>> Performance was awful: 2.5-5 MB/s writes, 25-30 MB/s reads.  So it
>> looks like iSCSI is out of the question, unless there is another
>> hardware option that would give me the performance I'd want?
>>
>> Fibre Channel SAN: I've been trying to avoid an FC solution because of
>> cost, but if it's what I need, it's what I have to get.  Any
>> suggestions on this?  Are these reliable enough to safely use just one
>> SAN in a 99% uptime environment?  Any good entry/mid-range models to
>> look at?
>>
>> AoE (Coraid.com): This looks to be a perfect solution: low cost and
>> potentially decent performance.  It's relatively new, though, and I
>> haven't heard of anyone using it yet.
>>
>> There doesn't seem to be much out there to tell me what sort of
>> performance I can expect out of these with GFS...  Any info would be
>> helpful.
>>
>> --
>> 
>> Linux-cluster@xxxxxxxxxx
>> http://www.redhat.com/mailman/listinfo/linux-cluster
>>
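As an aside, throughput figures like the iSCSI numbers quoted above can
be roughly reproduced with a plain dd run against the shared mount; the
/mnt/gfs path below is hypothetical:

```shell
# Sequential write test: 256 MB of zeros, followed by a sync so the
# buffered data actually reaches the array before the timer stops.
# /mnt/gfs is a hypothetical GFS mount point.
time sh -c 'dd if=/dev/zero of=/mnt/gfs/ddtest bs=1M count=256 && sync'

# Sequential read test (rough: the page cache can inflate this unless
# the file is larger than RAM or the cache is dropped first).
time dd if=/mnt/gfs/ddtest of=/dev/null bs=1M
```

Divide bytes written by the elapsed time for an MB/s figure comparable
to the ones above; it's crude, but enough to spot a 2.5 MB/s write path.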


