On Mar 13, 2009, at 8:05 AM, Gregory Stark wrote:
> "Jignesh K. Shah" <J.K.Shah@xxxxxxx> writes:
>> Scott Carey wrote:
>>> On 3/12/09 11:37 AM, "Jignesh K. Shah" <J.K.Shah@xxxxxxx> wrote:
>>> In general, I suggest that it is useful to run tests with a few
>>> different types of pacing. Zero-delay pacing will not have a
>>> realistic number of connections, but will expose bottlenecks that
>>> are universal, and less controversial.
>> I think I have done that before, so I can do that again by running
>> the users at 0 think time, which will represent a "connection pool"
>> that is highly utilized, and test how big the connection pool can be
>> before the throughput tanks. This can be useful for app servers
>> which set up connection pools of their own to talk to PostgreSQL.
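For reference, that sweep can be scripted around pgbench. A minimal
sketch, assuming a reasonably recent pgbench on PATH (for -T) and an
already-initialized test database; the database name, client counts,
and run length below are placeholders:

import re
import subprocess

DB = "pgbench_test"   # placeholder; any pgbench-initialized database
DURATION = 60         # seconds per run

for clients in (8, 16, 32, 64, 128, 256):
    result = subprocess.run(
        ["pgbench", "-c", str(clients), "-T", str(DURATION), DB],
        capture_output=True, text=True, check=True,
    )
    tps = re.search(r"tps = ([\d.]+)", result.stdout)
    print(f"{clients:4d} clients: {tps.group(1) if tps else '?'} tps")

Plotting tps against client count should show exactly where the
throughput tanks.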
> Keep in mind when you do this that it's not interesting to test a
> number of connections much larger than the number of processors you
> have. Once the system reaches 100% CPU usage, it would be a
> misconfigured connection pooler that kept more than that number of
> connections open.
How certain are you of that? I believe that assertion would only be
true if a backend could never block on *anything*, which simply isn't
the case. Of course in most systems you'll usually be blocking on IO,
but even in a ramdisk scenario there are other things you can end up
blocking on. That means having more threads than cores isn't
unreasonable.
If you want to see this in action in an easy-to-repeat test, try
compiling a complex system (such as FreeBSD) with different levels of
-j handed to make (of course you'll need to wait until everything is
in cache, and I'm assuming you have enough memory so that everything
fits in cache).
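For a quicker illustration, here is a minimal sketch (not a benchmark)
that stands in for blocking with sleeps; the sleep length, job count,
and thread counts are arbitrary placeholders. Because each worker
spends most of its time blocked, throughput keeps climbing well past
the core count:

import os
import time
from concurrent.futures import ThreadPoolExecutor

def fake_query():
    time.sleep(0.01)    # stand-in for a backend blocking on IO or locks
    sum(range(1000))    # plus a sliver of actual CPU work

def throughput(threads, jobs=2000):
    start = time.time()
    with ThreadPoolExecutor(max_workers=threads) as pool:
        for _ in range(jobs):
            pool.submit(fake_query)
    # leaving the with-block waits for every submitted job to finish
    return jobs / (time.time() - start)

cores = os.cpu_count() or 1
for n in (1, cores, 2 * cores, 4 * cores, 8 * cores):
    print(f"{n:3d} threads: {throughput(n):8.0f} jobs/sec")

Shrink the sleep relative to the CPU work and the curve flattens out
near the core count, which is the pure-CPU case Greg is describing.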
--
Decibel!, aka Jim C. Nasby, Database Architect decibel@xxxxxxxxxxx
Give your computer some brain candy! www.distributed.net Team #1828