At 01:49 PM 12/13/2006, Bucky Jordan wrote:
> I've only seen pg_bench numbers > 2,000 tps on either really large
> hardware (none of the above mentioned comes close) or the results
> are in memory due to a small database size (aka measuring update contention).

Which makes a laptop achieving such numbers all the more interesting IMHO.
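
For anyone trying to reproduce this, a rough pgbench sketch of the two cases (the scale-to-size figures are approximate, and exact flags may vary a bit by version) would be:

  # scale 10 is on the order of 150MB of data -- fits comfortably in 512MB RAM,
  # so the run mostly measures CPU cost and update contention
  pgbench -i -s 10 testdb
  pgbench -c 10 -t 10000 testdb

  # scale 100 is on the order of 1.5GB -- now the disks actually matter
  pgbench -i -s 100 testdb
  pgbench -c 10 -t 10000 testdb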

> Just a guess, but these tests (compiler opts.) seem like they
> sometimes show a benefit where the database is mostly in RAM (which
> I'd guess many people have) since that would cause more work to be
> put on the CPU/Memory subsystems.

The cases where the DB's working set, or at least the performance-critical
part of it, is RAM resident are very important ones ITRW.

> Other people on the list hinted at this, but I share their
> hypothesis that once you get IO involved as a bottleneck (which is a
> more typical DB situation) you won't notice compiler options.

Certainly makes intuitive sense. OTOH, this list has seen discussion
of operations that should be IO bound turning out to be CPU bound,
evidently due to the expense of processing pg data structures. Only
objective benchmarks are going to tell us where the various
limitations on pg performance really are.
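
A quick way to see which side a given run is actually limited by (assuming a Linux box with the sysstat tools installed) is to watch vmstat and iostat alongside the benchmark:

  # terminal 1: sample CPU and disk activity once a second
  vmstat 1
  iostat -x 1

  # terminal 2: the benchmark itself
  pgbench -c 10 -t 10000 testdb

Sustained high iowait and near-saturated device utilization point at the disks; pegged user/system CPU with mostly idle disks points at the pg code paths.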

> I've got a 2 socket x 2 core Woodcrest PowerEdge 2950 with a BBC
> 6-disk RAID I'll run some tests on as soon as I get a chance.
> I'm also thinking for this test, there's no need to tweak the
> default config other than maybe checkpoint_segments, since I don't
> really want postgres using large amounts of RAM (all that does is
> require me to build a larger test DB).

Daniel's original system had 512MB RAM. This suggests to me that
tests involving 256MB of pg memory should be plenty big enough.
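
Something along these lines in postgresql.conf ought to do it (a sketch only; 8.2 takes memory units directly, older releases want shared_buffers in 8KB pages, and the exact values are just my guess at sensible ones):

  shared_buffers = 256MB        # keep pg's own cache modest so the test DB can outgrow it
  checkpoint_segments = 32      # spread checkpoints out so they don't dominate write-heavy runs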

> Thoughts?

Hope they are useful.
Ron Peacetree