On 21 Jul 2005, at 17:02, Scott Marlowe wrote:
> On Thu, 2005-07-21 at 02:43, vinita bansal wrote:
>> Hi,
>>
>> My application is database intensive. I am using 4 processes since I have 4 processors on my box. There are times when all 4 processes write to the database at the same time, and times when all of them read at once. The database is definitely not read only. Out of the entire database, there are a few tables which are accessed most of the time, and they are the ones which seem to be the bottleneck. I am trying to get as much performance improvement as possible by putting some of these tables in RAM, so that they don't have to be read from or written to the hard disk — they will be directly available in RAM. Here's where Slony comes into the picture, since we'll have to maintain a copy of the database somewhere before running our application (everything in RAM will be lost if there's a power failure or anything else goes wrong).
>>
>> My concern is how good Slony is. How much time does it take to replicate the database? If the time taken to replicate costs more than the performance improvement we are getting by putting tables in RAM, then there's no point in going for such a solution. Do I have an alternative?
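(For anyone following along, the setup being described would look roughly like this. Just a sketch, assuming Linux with tmpfs and PostgreSQL 8.0+ for tablespaces; the mount point, tablespace, and table names are made up.)

```shell
# Sketch only: back a tablespace with RAM. Everything here is lost on
# power failure, which is exactly why a disk-based replica is wanted.
mkdir -p /mnt/pgram
mount -t tmpfs -o size=2g tmpfs /mnt/pgram
chown postgres:postgres /mnt/pgram

# Then, in psql (PostgreSQL 8.0+), move the hot tables onto it:
#   CREATE TABLESPACE ramspace LOCATION '/mnt/pgram';
#   ALTER TABLE hot_table SET TABLESPACE ramspace;
```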
> My feeling is that you may be going about this the wrong way. Most likely the issue so far has been I/O contention. Have you tested your application using a fast, battery-backed caching RAID controller on top of, say, a 10-disk RAID 1+0 array? Or even RAID 0, with another machine as the Slony slave?
Isn't that slightly cost prohibitive? Even basic memory has enormously fast access/throughput these days, and for a fraction of the price.
> Slony, by the way, is quite capable, but using a RAMFS master and a disk-drive-based slave is a recipe for disaster in ANY replication system under heavy load: since Slony is asynchronous replication, it is quite possible for the master to get very far ahead of the slave. At some point you could have more data waiting to be replicated than your ramfs can hold, and then you have real problems.
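(Side note: how far the master has gotten ahead can be watched from Slony's own tables. A sketch, assuming Slony-I with a cluster named "mycluster", whose schema would be _mycluster:)

```sql
-- Run on the origin node: shows replication backlog per subscriber.
-- st_lag_num_events / st_lag_time grow when the slave falls behind.
SELECT st_origin, st_received, st_lag_num_events, st_lag_time
  FROM _mycluster.sl_status;
```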
> If a built-in RAID controller with battery-backed caching isn't enough, you might want to look at a large, external storage array. Many hosting centers offer these as a standard part of their package, so rather than buying one, you might want to just rent one, so to speak.
Again with the *money*: RAM = cheap, disks = expensive, at least when you look at speed/$. You're right about replicating from RAM to disk, though — that is pretty likely to result in horrible problems if you don't keep the load down. For some workloads, though, I can see it working. As long as the total amount of data waiting to be replicated doesn't get larger than your RAMFS, it could probably survive.
---------------------------(end of broadcast)---------------------------
TIP 3: Have you checked our extensive FAQ?
http://www.postgresql.org/docs/faq