On 09/14/2011 02:56 AM, Carlo Stonebanks wrote:
> Even for 300 stateful applications that can remain connected for up to a
> week, continuously distilling data (imports)?

If they're all doing active work all that time, you can still benefit
from a pooler.
Say your server can service 50 connections at optimum speed, and any
more result in reduced overall throughput. You have 300 apps with
statements they want to run. Your pooler will basically queue them, so
at any one time 50 are doing work and 250 are waiting for database
access. This should _improve_ database throughput by reducing contention
if 50 worker connections is your sweet spot. However, it will also
increase service latency for those workers, because they may have to
wait a while before their transaction runs, even though the transaction
itself will complete much faster once it does run.
You'd probably want to pool at the transaction level, so once a client
gets a connection it keeps it for the lifetime of that transaction and
the connection is handed back to the pool when the transaction commits
or rolls back.
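
PgBouncer's transaction pooling mode does exactly that. A minimal
pgbouncer.ini sketch along those lines; the database name and the pool
size of 50 are just the assumed sweet spot from the example above, not
measured values:

  [databases]
  ; hypothetical database the import apps connect to
  importdb = host=127.0.0.1 port=5432 dbname=importdb

  [pgbouncer]
  listen_addr = 127.0.0.1
  listen_port = 6432
  auth_type = md5
  auth_file = /etc/pgbouncer/userlist.txt
  ; hand the server connection back at commit/rollback
  pool_mode = transaction
  ; the assumed 50-connection sweet spot
  default_pool_size = 50
  ; all 300 import apps can stay connected to the pooler
  max_client_conn = 300

One caveat: transaction pooling only works if the applications don't
rely on session state (SET values, session-level advisory locks,
LISTEN/NOTIFY, session-level prepared statements) surviving across
transactions.
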
>> you want the controller configured for write-back (with automatic
>> switch to write-through on low or failed battery, if possible).
>
> For performance or safety reasons? Since the sys admin thinks there's no
> performance benefit from this, I would like to be clear on why we should
> do this.

fsync!
If your workload is read-only, it won't help you much. If your workload
is write-heavy or fairly balanced, it'll make a HUGE difference, because
fsync() on commit won't have to wait for disk I/O, only for the write
into the RAID controller's battery-backed cache.
You can also play with commit_delay and synchronous_commit to trade
guarantees of data persistence off against performance. Don't mind
losing the last few seconds of commits if you lose power? These options
are for you.
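
The relevant postgresql.conf lines look something like this; the values
are illustrative starting points, not recommendations for your hardware:

  # Commits return as soon as the WAL record is queued in memory. A crash
  # or power loss can lose the last few wal_writer_delay cycles of
  # commits (seconds, not minutes), but cannot corrupt the database.
  synchronous_commit = off

  # Alternatively, keep commits synchronous but batch the WAL flushes:
  #commit_delay = 10000    # microseconds to wait so concurrent commits
                           # can share one flush
  #commit_siblings = 5     # only delay when at least 5 other txns are active
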
Whatever you do, do NOT set fsync=off. It should be called "Eat my data
if anything goes even slightly wrong=on"; it does have legitimate uses,
but they're not yours.
>>> Can our particular setup benefit from changing the bgwriter values?
>>
>> Probably not. If you find that your interactive users have periods
>> where queries seem to "freeze" for a few minutes at a time and then
>> return to normal levels of performance, you might need to make this
>> more aggressive.
>
> We actually experience this. Once again, remember the overwhelming use
> of the system is long-running import threads with continuous
> connections. Every now and then the imports behave as if they are
> suddenly taking a deep breath, slowing down. Sometimes, so much that we
> cancel the import and restart (the imports pick up where they left off).

This could definitely be a checkpointing issue. Enable checkpoint logging.
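
In postgresql.conf, something like the following; checkpoint_segments
applies to the 8.x/9.x releases current at the time of writing, and the
numbers are starting points to adjust against what the log shows, not
tuned values:

  log_checkpoints = on                 # log when each checkpoint starts/finishes
  checkpoint_segments = 32             # more WAL between checkpoints (default 3)
  checkpoint_timeout = 15min           # checkpoint less often (default 5min)
  checkpoint_completion_target = 0.9   # spread checkpoint I/O over the interval

If the log shows your "deep breath" slowdowns lining up with
checkpoints, spreading the checkpoint I/O out like this is the usual
first fix.
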
> What would the bgwriter settings be in this case?

You need to tune it for your workload, I'm afraid. See the manual and
the mailing list archives.
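
For reference, these are the background writer knobs, shown with their
defaults; making it "more aggressive" means a shorter delay and a higher
page limit, but the right values depend entirely on your I/O subsystem:

  bgwriter_delay = 200ms          # sleep between rounds; lower = more aggressive
  bgwriter_lru_maxpages = 100     # max dirty buffers written per round
  bgwriter_lru_multiplier = 2.0   # clean this multiple of recent buffer demand
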
--
Craig Ringer