On 23.11.2006, at 23:37, Gopal wrote:
> hared_buffers = 20000          # min 16 or max_connections*2, 8KB each
If this is not a copy & paste error, you should add the "s" at the
beginning of the line.
Also, you might want to set this to a higher number. You are allocating
about 20000 * 8kB = 160MB, which might be a bit too small if you run a
lot of queries spread over the whole dataset. I don't know whether the
memory management on Windows handles larger settings well, but you can
give it a try.
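As a rough sketch (the value below is only an assumption, pick something
that fits the RAM actually available on your machine), a larger setting
in postgresql.conf would look like this:

    shared_buffers = 50000        # 50000 * 8kB = ~400MB

Note that the server has to be restarted before a change to
shared_buffers takes effect.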
> effective_cache_size = 82728 # typically 8KB each
Hmm. I don't know what the real effect of this might be, as the
documentation states:
"This parameter has no effect on the size of shared memory allocated
by PostgreSQL, nor does it reserve kernel disk cache; it is used only
for estimation purposes."
You should try optimizing your shared_buffers to cache more of the data.
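Just to put the number in context, effective_cache_size is also counted
in 8kB pages, so it only tells the planner how much OS disk cache it may
assume:

    effective_cache_size = 82728   # 82728 * 8kB is roughly 650MB of assumed OS cache

If the machine actually has more free memory in use as filesystem cache,
you can raise this value; it does not allocate anything by itself.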
> But postgres has everything spread across 10-15 processes, with
> each process using about 10-30MB, not nearly enough to cache all
> the data and ends up doing a lot of disk reads.
It's not so easy. PostgreSQL maintains a shared buffer pool
(shared_buffers) which is accessible by all processes for reading. On a
Unix system you can see this in the output of top - I don't know how
this works on Windows.
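For example, on Linux or BSD (just an illustration of the Unix side, I
don't know the Windows equivalent) the shared memory segment backing the
buffer pool shows up with:

    ipcs -m        # lists System V shared memory segments; the large one belongs to PostgreSQL

The per-process sizes reported by top largely count that same shared
segment again for every backend, so adding them up overstates the real
memory consumption.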
> In any case I cannot believe that having 15-20 processes running on
> Windows helps. Why not spawn threads instead of processes, which
> might be far less expensive and more efficient. Is there any way of
> doing this?
Because it would bring a whole lot of other problems? And because
PostgreSQL is not "made for Windows". PostgreSQL runs very well on
Linux, BSD, Mac OS X and others; the Windows version is quite young.
But before you blame the performance on PostgreSQL, you should give more
information about the query itself.
> My question is, should I just accept the performance I am getting
> as the limit on Windows or should I be looking at some other params
> that I might have missed?
Post the "explain analyse select <your query here>" output here. That
might help us understand why you get such a high CPU load.
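A minimal sketch of what that looks like (table and column names are
placeholders, substitute your real query):

    EXPLAIN ANALYSE
    SELECT *
    FROM your_table
    WHERE some_column = 42;

The output shows the plan the planner chose together with the actual row
counts and timings for every node, which usually points straight at the
expensive step.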
cug