Re: Advice on Storage Hardware

David,
Check out wackamole.  It may be a good fit for your application
requirements.

If you serve static content, a "shared-nothing" architecture may be
possible, in which each node holds an exact replica of the content to be
served.  The replicas can be maintained with a tool like rsync.  This is
likely to give higher reliability (fewer components to fail -- no SCSI
bus, no finicky fibre channel hardware, no extra cabling) and, combined
with wackamole, higher availability (MTTR is virtually zero because a
failed node is simply taken out of the server pool).

One problem you may have if you use GFS with a SCSI array is that the
array becomes a single point of failure.  If the array is not RAID,
individual disks are single points of failure.  Even with RAID 1, two
mirrored disks can fail at the same time, resulting in downtime and
restoration from tape (if your company does tape backups).  That
failure scenario will surely annoy your customers :)

The same problem applies to fibre channel, except that the hubs and
switches become SPOFs in addition to the other components, unless
storage I/Os are replicated across independent paths with multipathing.
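For illustration, a dm-multipath configuration stub along these lines
(the WWID and alias below are placeholders -- use the WWID your array
actually reports for the LUN):

```
# /etc/multipath.conf (sketch; WWID is a placeholder)
defaults {
        user_friendly_names yes
}
multipaths {
        multipath {
                wwid   360000000000000000000000000000001
                alias  webdata
        }
}
```

Note that each server then needs two HBAs cabled through independent
switches to both array controllers; otherwise multipathing only
protects against a single cable or port failure.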

With dual Ethernet, you add the possibility of a redundant network, but
bonding is really a poor fit for your availability requirements.  What
you really need is a redundant network protocol (such as the Totem
Redundant Ring Protocol, expected from the openais project in the near
future) merged with wackamole's functionality.
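That said, if you do end up with plain bonding in the meantime,
active-backup mode at least removes a single NIC, cable, or switch port
as a SPOF.  A RHEL-style sketch (device names and the address are
placeholders):

```
# /etc/sysconfig/network-scripts/ifcfg-bond0 (sketch; IP is a placeholder)
DEVICE=bond0
IPADDR=192.168.0.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none

# /etc/modprobe.conf
alias bond0 bonding
options bonding mode=active-backup miimon=100
```

For the switch itself not to be a SPOF, the two slave NICs should be
cabled to different switches.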

Good luck
-steve

On Fri, 2005-11-11 at 16:39 -0500, David Brieck Jr. wrote:
> On 11/11/05, Mark Hlawatschek <hlawatschek@xxxxxxx> wrote:
> > Hi,
> >
> > actually there is no problem building a GFS cluster with some kind of IP-based
> > storage (gnbd). You can also build a web server infrastructure on top of an NFS
> > server (or NAS appliance). But it is also a good idea to use a GFS cluster
> > (maybe diskless shared root) based on an FC infrastructure if you need
> > performance, scalability and reliability. The question is always what
> > you really need ...
> > Can you describe your requirements for the web services at a more detailed
> > level - maybe I/O rate, scalability, reliability?
> >
> > It's normally not the network bandwidth that is the problem, it is the
> > latency.
> >
> > Mark
> 
> Thanks Mark. We're currently on a two-server setup: a dedicated
> database server and a combined web/email/DNS server. We're really
> outgrowing our hardware much faster than anticipated. We've already
> upgraded each server to dual Xeons and 4/5 GB RAM each.
> 
> Our problem is this: we are growing exponentially and are finding it
> difficult to keep up with the increased needs on our webserver. Our
> database server is handling things just fine since our website(s) are
> as static as we can make them.
> 
> Our company has a 24 hr IT staff to handle the internal network, but
> there are only a few of us Linux guys and we can't always be there
> when something happens. We need to be able to survive a hardware or
> software failure and keep going until we can either correct the
> problem or install and configure a replacement.
> 
> We'd like, number one, to scale out the number of webservers to
> respond to increased demand relatively easily, and number two, to stay
> online when a failure brings down a server. We also have plans to add
> streaming capabilities and similarly need to be able to respond to
> increased demand by scaling out.
> 
> When we put the cluster into place we plan to have three backend
> machines clustered and two web servers load balanced, with the ability
> to add webservers at will. We also plan on dual Ethernet with channel
> bonding.
> 
> Thanks for any advice you can offer. I plan on attending some of the
> advanced RedHat classes on this subject to get a better grip on things
> before we start the project, but for now my only resource is the
> Internet and helpful people.
> 
> --
> 
> Linux-cluster@xxxxxxxxxx
> https://www.redhat.com/mailman/listinfo/linux-cluster

