On Thu, Jan 7, 2010 at 11:14 PM, Carlo Stonebanks <stonec.register@xxxxxxxxxxxx> wrote:
>> It might well be checkpoints. Have you tried cranking up checkpoint
>> segments to something like 100 or more and seeing how it behaves then?
>
> No I haven't, although it certainly makes sense - watching the process run,
> you get this sense that the system occasionally pauses to take a deep, long
> breath before returning to work frantically ;D
>
> checkpoint_segments is currently set to 64. The DB is large and is in a
> constant state of receiving single-row updates as multiple ETL and
> refinement processes run continuously.
>
> Would you expect going to 100 or more to make an appreciable difference, or
> should I be more aggressive?

If you're already at 64, then not much. It probably wouldn't hurt to
crank it up further and delay the checkpoints as much as possible
during these updates. 64 segments is already 1024 MB of WAL; if a
single update / insert run changes a lot more data than that, raising
it further might help.

What you might also need to do is raise checkpoint_completion_target
to something closer to 100%. Since most of the updates / inserts are
likely hitting different rows each time rather than the same rows over
and over, the closer each checkpoint can get to 100% complete before
the next one starts, the better. This causes somewhat more I/O
overall, but spreads it out more evenly (hopefully) so that you don't
get checkpoint spikes.

Look into the checkpoint logging options so you can monitor how
checkpoints are affecting system performance.

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
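[Editor's note: a sketch of how the suggestions above might look in postgresql.conf on a server of that era (8.3/8.4, where checkpoint_segments still exists). The specific values are illustrative starting points, not tuned recommendations:]

```
# postgresql.conf -- illustrative values for a continuous single-row-update workload

checkpoint_segments = 128             # up from 64; ~2 GB of WAL between forced
                                      # checkpoints (each segment is 16 MB)
checkpoint_completion_target = 0.9    # spread checkpoint writes over 90% of the
                                      # interval instead of the default 0.5
log_checkpoints = on                  # log timing and buffer counts for each
                                      # checkpoint, to verify the effect
```

After changing these, reload or restart the server and watch the checkpoint log lines to see whether the pauses line up with checkpoint activity.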