Re: [Linux-cluster] cluster architecture

Raw throughput isn't really an issue for us; we're more interested in seek times. My biggest concern with GFS is stability and performance, so any feedback regarding those would be greatly appreciated. Thanks!

Rick Stevens wrote:
vahram wrote:

Hi all,

I'm planning to put together a production web server farm that will consist of at least 6 servers. They will all be running Apache and Postfix, and will be sharing a 4+TB storage device. Horizontal scalability is a major issue for us.

I just wanted to get some general recommendations on who to go with for our storage needs. We were considering a Netapp appliance, but the cost is extremely high and their solution is probably a bit overkill for our needs. Cost is a major issue for us.

How does the performance of a Netapp appliance running NFS compare to a fibre-based storage device (such as an Apple XServe RAID or similar unit) running GFS? Is anyone here running GFS on a production server farm? Thanks!


We use NetApps a lot.  Their performance is terrific, but it's NFS over
gigabit Ethernet with all that entails, and throughput isn't as high as
it would be on a SAN or other block-level device (this is true of any NAS).

I will say that NetApps are bulletproof, easy to expand and software
updates are very, very simple.  Licensing is not cheap, but the fact you
can run CIFS and NFS simultaneously is a plus.  Yes, they cost money,
but you get what you pay for.  You could simulate a NetApp by getting a
really beefy server with an FC or SCSI SAN attached to it and making it
an NFS (and possibly Samba) server.  I won't swear to what kind of
performance you'd get, but you could possibly get 80% of wire speed,
depending on your network architecture and other features.
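
For rough numbers, here's the back-of-envelope math behind that 80% figure
(assuming a single GigE link per client, ignoring jumbo frames and the
finer points of protocol overhead):

    1 Gbit/s / 8 bits per byte  =  ~125 MB/s raw wire speed
    125 MB/s x 0.80             =  ~100 MB/s usable NFS throughput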

If you're using any NFS or NAS as a common file system, make sure you
have "noac" set for the mounts or you may miss files put on the storage
by other systems.  Unfortunately, this eats into performance, but that's
the nature of the beast.
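
As a concrete sketch, an /etc/fstab entry with attribute caching disabled
might look like this (the "filer" hostname and paths here are placeholders,
not anything specific to your setup):

    # noac disables NFS attribute caching so each client sees files
    # created by the other nodes without waiting for the cache to expire
    filer:/vol/shared  /mnt/shared  nfs  rw,hard,intr,noac  0 0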

As far as SANs are concerned, you'll probably need a Fibre Channel system
for 6 nodes unless you can find a 6-port SCSI unit (doubtful).  If you
choose FC, you'll need to think about the switch fabric and whether you
will have to deal with multipathing.  If that's true, you have to make
sure your vendor has multipathing modules for your kernel.  You also
need to look at bandwidth and whether the SAN you're looking at can
sustain the I/O bandwidth you want.  You also need to figure out how
you're going to share that storage among the nodes in the cluster.
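
If you do end up multipathed and use the stock device-mapper multipath
tools rather than a vendor driver, a quick sanity check on each node looks
something like this (assuming dm-multipath is installed and running;
output details vary by version):

    # show the multipath topology the kernel has assembled,
    # including each path's state (active/failed)
    multipath -ll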

We are evaluating several fairly large SANs for use with GFS, but our
bandwidth needs are a bit, well, over-the-top.  We need 9Gbps aggregate
throughput.  We're looking at IBM as well as Hitachi FC SAN solutions.
----------------------------------------------------------------------
- Rick Stevens, Senior Systems Engineer     rstevens@xxxxxxxxxxxxxxx -
- VitalStream, Inc.                       http://www.vitalstream.com -
-                                                                    -
-                 IGNORE that man behind the keyboard!               -
-                                                - The Wizard of OS  -
----------------------------------------------------------------------


