Jean-David Beyer wrote:
Chris wrote (in part):
I didn't have logging set up before but it's up and running now and I
was getting
LOG: checkpoints are occurring too frequently (26 seconds apart)
HINT: Consider increasing the configuration parameter
"checkpoint_segments".
So I increased that from 10 to 30 and it finished:
UPDATE 3500101
Time: 146513.349 ms
I have not used PostgreSQL since I tried it once in about 1998 (when I found
it unsatisfactory, but much has changed since then), but I am going to try
it again. What would be a good checkpointing interval? I would guess 26
seconds is too often. What considerations go into picking a checkpointing
interval?
Basically, it depends on the amount of updates you have and whether you
want to minimise total writes or keep the load even. Lots of
checkpointing means you'll do more writing, but in smaller chunks. The
only way to find out the right value for you is to test on a realistic
setup I'm afraid.
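For what it's worth, a postgresql.conf fragment for an update-heavy workload
might look like the following. The values are only illustrative starting
points (roughly matching what the O.P. ended up with), not recommendations
from this thread -- you'd still need to test them against a realistic load:

```
# postgresql.conf -- illustrative values only; tune against your own workload
checkpoint_segments = 30     # WAL segments between checkpoints (default 3)
checkpoint_timeout  = 300    # force a checkpoint at least this often, seconds
checkpoint_warning  = 30     # log a hint if checkpoints come closer than this
```

Raising checkpoint_segments spreads checkpoints out (less repeated writing of
the same pages), at the cost of more disk used for WAL and a longer crash
recovery.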
I note, from the book "PostgreSQL", second edition, by Douglas and Douglas,
that the following parameters are available:
WAL_BUFFERS          The default is 8.
CHECKPOINT_SEGMENTS  The default is 3. This would have been too low for the
                     O.P. Would it make sense to start with a higher value,
                     or is this a good default that just wasn't appropriate
                     for the O.P.? Should CHECKPOINT_SEGMENTS be raised until
                     checkpoints occur about half as often as
                     CHECKPOINT_TIMEOUT, e.g. every 150 seconds, while the
                     DBMS is running a typical workload?
CHECKPOINT_TIMEOUT   The default is 300 seconds.
CHECKPOINT_WARNING   The default is 30 seconds.
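If it helps, you can see what a running server is actually using for any of
these parameters from psql with SHOW (this is standard psql usage, not
something specific to the O.P.'s setup):

```
-- run from a psql session connected to the server
SHOW checkpoint_segments;
SHOW checkpoint_timeout;
SHOW wal_buffers;
```

That way you can confirm whether an edit to postgresql.conf has actually
taken effect after a reload or restart.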
If your updates are large (rather than having lots of small ones) then
increasing wal_buffers might be useful.
If you have a lot of updates, you'll want to increase
checkpoint_segments at least. You'll see a mention in the logs when PG
thinks checkpoints are too close together (controlled by
checkpoint_timeout and checkpoint_warning).
Of course, a lot of people will have PostgreSQL installed on a PC or
laptop along with the rest of the Linux distro. They'll not want to
allocate too many resources.
My machine has 8 GBytes RAM and it worked perfectly well (very very little
paging) when it had 4 GBytes RAM. I doubled it because it was cheap at the
time and I was afraid it would become unavailable later. It is usually
between 2/3 and 3/4 used by the cache. When I run IBM DB2 on it, the choke
point is the IO time spent writing the logfiles.
If DB2 was I/O-saturated writing its transaction log, I'd be surprised if
PG isn't too.
--
Richard Huxton
Archonet Ltd