Re: WebFarm using RedHat cluster suite ?

Thank you again for the input,
We will use a 2 TB FC SAN (after checking the budget) with 10K RPM FC disks.
Our PostgreSQL cluster will be active/passive (two nodes connected to a RAID 0+1 group on the SAN). The servers inside the web farm will be connected to a RAID 5 group on the same SAN.
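
For the active/passive pair, here is a rough sketch of how the service would be driven under the RHEL 4 cluster suite (rgmanager). The service name "pgsql" and the member name are placeholders; the actual resources (floating IP, filesystem, init script) live in /etc/cluster/cluster.conf:

    # show cluster members and where the service currently runs
    clustat

    # enable (start) the PostgreSQL service
    clusvcadm -e pgsql

    # relocate it to the passive node, e.g. for maintenance
    clusvcadm -r pgsql -m node2.example.com

    # disable (stop) the service
    clusvcadm -d pgsql

Failover itself should be automatic once the service and its failover domain are defined; clusvcadm is mainly for manual moves.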



Frédéric Médery
System Administrator
LexUM, Université de Montréal

mederyf@xxxxxxxxxxxxxxxxxx
tel. : (514) 343-6111  #3288



Marc Grimme wrote:

On Thursday 19 January 2006 17:43, FM wrote:
First, thank you all for the great input!
Here is more information about our sites:
Some are static HTML sites; they are generated on the LAN and then rsynced to the DMZ. Others use mod_perl with a local PostgreSQL database.
Don't run PostgreSQL clustered (parallel, with multiple writer nodes; HA active/passive should not be a problem) on GFS. As far as I know, that is not supposed to work. You would need a DBMS that supports parallel clustering itself, e.g. Oracle 9i RAC.
One of our most visited sites (static HTML) uses nearly 200 GB of bandwidth per month and gets 545,368 hits a day.
For the December 2005 stats:
http://stats.lexum.umontreal.ca/awstats.pl?month=12&year=2005&output=main&config=www.canlii.org&lang=en&framename=index

Right now we are using dual Xeon (2 GHz) servers with 2.5 GB of RAM and RAID 5 on 10K RPM SCSI disks.
The network is gigabit.
The budget is 200K CA$.
OK, you are seeing 500,000 hits/day. One question is how many I/Os a hit issues. The other question is whether that is the upper bound, or whether you expect it to grow so that the infrastructure will have to grow with it. You should really be picky about the storage infrastructure (that includes the storage system and the locking network); keep the storage network and the locking network separate. Under normal circumstances 500,000 hits/day should not be a problem. But if you are using SCSI disks right now, stay with SCSI or FC disks; they are much faster and more reliable than SATA. The more and faster disks you use, the more I/Os you will get. Besides, for 200K CA$ you should also be able to get an FC infrastructure and storage.

Also think about consolidating all data (even the OS; share the root filesystem of the servers) onto the storage system. It saves you loads of management (it scales within a small amount of time) and disks in the servers, and you can easily replace a faulty server by pulling it out and powering a new one on. We have had quite good experience with that concept on web farms. Check http://www.open-sharedroot.org/; we will have a howto for building up such a shared root within the next week.
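
To put a rough number on the "I/Os per hit" question, you could sample the disk statistics during a busy hour with iostat (from the sysstat package) and compare them with the request rate in the web server logs; the interval below is just an example:

    # extended per-device statistics, sampled every 60 seconds
    iostat -x 60

    # 545368 hits/day is only about 6.3 hits/s on average, so
    # compare r/s + w/s on the web-content device against the
    # requests/s the web server logs for the same interval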

Hope that helps
Regards Marc.
Thanks again

Eric Anderson wrote:
Marc Grimme wrote:
Hello,
I think the best way to tell what storage or infrastructure would be best is to know more about your current setup and which of its issues you want to get ahead of.
For example: if you are really thinking about using iSCSI, I don't think SCSI or SATA drives make a big difference, depending on how many drives you use. But if all web servers currently have locally attached disk drives, you will want it to scale just as linearly when you exchange an IDE/parallel-SCSI bus for a network topology using Ethernet. My opinion is: if you have a lot of I/Os, make yourself mostly independent of Ethernet latency and think about using Fibre Channel with GFS.
Honestly, Ethernet latencies (especially on gigabit Ethernet) are lower than Fibre Channel latencies, so this statement doesn't really hold up.
If you want very fast speeds, get an iSCSI array, populate it with 15K RPM SCSI disks with big caches, max out the cache on the array, and set it up as RAID 0+1 (or RAID 10, depending on the implementer). If you want fast speed but not a big price, you can notch down to 10K SCSI disks, or use 15K SCSI disks and RAID 5, etc., and keep notching down until it fits your budget and needs.
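
For what it's worth, with a current open-iscsi initiator the discovery and login against such an array look roughly like the following; the portal address and target IQN are made up, and the older initiator shipped with RHEL 4 is configured differently (via /etc/iscsi.conf):

    # discover the targets exported by the array
    iscsiadm -m discovery -t sendtargets -p 192.168.10.50

    # log in to the discovered target
    iscsiadm -m node -T iqn.2006-01.com.example:webfarm.lun0 \
             -p 192.168.10.50 --login

    # the LUN then shows up as a normal /dev/sd* block device
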
I agree here though that we really need to know a few things:

- what kind of traffic is this?
- size of the files most commonly used
- total data size (how much space you need)
- budget
- demands (availability/performance/etc)

Eric

But the best advice can be given if you describe your current setup and the things you want to achieve more clearly.

Regards Marc.

On Wednesday 18 January 2006 22:01, FM wrote:
Thanks for the reply.
I read about SATA storage, but we sync from the LAN to the DMZ, so there are lots of reads and writes.
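
Since the LAN-to-DMZ sync keeps coming up: a typical push of the generated content with rsync over ssh looks something like this (the host name and paths are placeholders):

    # push the generated site to the DMZ web server, deleting
    # files that no longer exist on the LAN side
    rsync -az --delete -e ssh /var/www/generated/ \
          webuser@dmz-web1:/var/www/html/

With shared GFS storage behind the web farm, the target would be a single mount point instead of every web server.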

Michael Will wrote:
I am surprised you use SCSI drives for the storage if you are price sensitive; usually SATA is the better bang for the buck, unless you are doing databases with lots of small reads and writes.

Michael

-----Original Message-----
From: linux-cluster-bounces@xxxxxxxxxx
[mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of FM
Sent: Wednesday, January 18, 2006 12:34 PM
To: Redhat Cluster
Subject:  WebFarm using RedHat cluster suite ?

Hello everybody,

Is Red Hat Cluster Suite (RHEL 4) a good candidate for a web farm?
My setup would be: several servers (1U AMD dual core) connected to an
iSCSI storage array.

Is iSCSI a good choice for the hardware (SAN prices are too high for us)?
Our network is gigabit.
We will have 10K RPM SCSI disks + read and write cache on the SCSI card +
RAID 5.
Thanks!
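
For reference, if the web content ends up on shared storage under GFS on RHEL 4, creating and mounting the filesystem looks roughly like this (cluster name, filesystem name, journal count and device path are placeholders; the journal count must be at least the number of nodes that will mount it):

    # create a GFS filesystem with DLM locking and four journals
    gfs_mkfs -p lock_dlm -t webcluster:webdata -j 4 /dev/vg_san/lv_web

    # mount it on each web server
    mount -t gfs /dev/vg_san/lv_web /var/www/html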









