On the boxes in question the settings are:

shared_buffers = 1000
work_mem = 1024

I have revised these on my DEV box and see some improvement (a quick thank
you to Jim Nasby for his assistance with that):

shared_buffers = 20000
work_mem = 8024

Regards,
Tim

-----Original Message-----

For a standard config
most of the memory used by Postgres is the shared buffers. The shared
buffers are a cache to store blocks read from the disk, so if you do a
query, Postgres will allocate and fill the shared buffers up to the max
amount you set in your postgresql.conf file. Postgres doesn't release that
memory between queries because the point is to be able to pull data from
RAM instead of the disk on the next query.

Are you sure your settings in postgresql.conf are standard? What are your
settings for shared_buffers and work_mem?

-----Original Message-----

Are you
saying the kernel's disc cache may be getting whacked? No, I understand
that PG should use as much memory as it can, and the system as well. The
main problem here is that with almost all of the 8GB of RAM 'in use', when
I try to do a pg_dump or vacuumdb I run out of memory and the system
crashes. I well understand that unused memory is not a good thing; it's
just that when you have none and can't do the maintenance work, bad stuff
happens.

For example, I just created a benchdb on my DEV box with 1,000,000 tuples.
As this ran, the memory in use jumped up 1GB and it hasn't gone down. Once
the PG process has finished its task, shouldn't it release the memory it
used?

Thanks,

-----Original Message-----

"mcelroy, tim" <tim.mcelroy@xxxxxxxxxxxxxxx> writes:

Probably
kernel disk cache. Are you under the misimpression that unused
regards, tom lane
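
As context for the numbers quoted at the top of the thread: on PostgreSQL
releases of this era (before 8.2, which introduced unit suffixes),
shared_buffers is a count of 8 kB buffers and work_mem is in kilobytes. A
sketch of the arithmetic, assuming the default 8 kB block size:

```python
# Convert the postgresql.conf values from the thread into megabytes.
# Assumption: the default 8 kB block size (BLCKSZ) is in effect.
PAGE_KB = 8  # PostgreSQL's default block size, in kB

def shared_buffers_mb(buffers: int) -> float:
    """shared_buffers is counted in 8 kB disk pages on pre-8.2 PostgreSQL."""
    return buffers * PAGE_KB / 1024

def work_mem_mb(kb: int) -> float:
    """work_mem is counted in kB, per sort/hash operation, per backend."""
    return kb / 1024

print(shared_buffers_mb(1000))   # original setting -> 7.8125 MB
print(shared_buffers_mb(20000))  # revised setting  -> 156.25 MB
print(work_mem_mb(8024))         # revised setting  -> ~7.84 MB per sort
```

So the original shared_buffers of 1000 gave Postgres only about 8 MB of
buffer cache on an 8GB box, which is why raising it helped.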
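
To illustrate Tom's "kernel disk cache" point: on Linux, much of the RAM
reported as 'in use' is reclaimable page cache, not memory Postgres is
holding, and it is released on demand. A minimal sketch that parses
/proc/meminfo-style text; the SAMPLE numbers are made up for illustration
(the field names are real /proc/meminfo fields):

```python
# Illustrative /proc/meminfo excerpt -- the values are invented, not
# taken from the poster's machine. On a real box you would read the
# same fields from open("/proc/meminfo").
SAMPLE = """\
MemTotal:        8166744 kB
MemFree:          123456 kB
Cached:          6543210 kB
"""

def meminfo_mb(field: str, text: str = SAMPLE) -> float:
    """Return the named meminfo field, converted from kB to MB."""
    for line in text.splitlines():
        if line.startswith(field + ":"):
            return int(line.split()[1]) / 1024
    raise KeyError(field)

# "Free" memory alone looks alarmingly low on a busy box...
free_mb = meminfo_mb("MemFree")
# ...but the page cache is dropped on demand, so it is effectively
# available to pg_dump, vacuumdb, or anything else that asks for it:
effectively_free_mb = free_mb + meminfo_mb("Cached")
```

This is why the 1GB jump during the benchdb load never "came back": the
kernel keeps the freshly read blocks cached until something else needs the
memory.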