Re: High checkpoint_segments

On Wed, Feb 15, 2012 at 1:35 AM, Jay Levitt <jay.levitt@xxxxxxxxx> wrote:
We need to do a few bulk updates as Rails migrations.  We're a typical read-mostly web site, so at the moment our checkpoint settings and WAL are all default (3 segments, 5 min, 16MB), and updating a million rows takes 10 minutes due to all the checkpointing.

We have no replication or hot standbys.  As a consumer-web startup with no SLA and not a huge database, it's OK if recovery from any downtime takes longer than usual. Is there a reason NOT to always run with something like checkpoint_segments = 1000, as long as I leave the timeout at 5m?

Checkpoints will still occur every 5 minutes regardless. In any case, checkpoint_segments = 1000 is huge: it implies up to 16 MB × 1000 = 16,000 MB (~16 GB) of pg_xlog data, which is inadvisable from both an I/O and a data-loss perspective. Even in the unlikely event that all 1000 segments fill in under 5 minutes, the system could slow down under the resulting I/O and CPU load.

You may also consider increasing checkpoint_timeout, but some monitoring and analysis is needed to arrive at a sensible number.
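For illustration, a temporary bulk-update configuration might look like the sketch below in postgresql.conf. These values are assumptions for the sake of example, not recommendations; they would need to be validated against your hardware and recovery-time tolerance.

    # postgresql.conf -- hypothetical bulk-update settings, tune before use
    checkpoint_segments = 32     # default is 3; allows more WAL between checkpoints
    checkpoint_timeout = 15min   # default is 5min; longer gap between timed checkpoints

Both settings take effect on a reload (pg_ctl reload, or SELECT pg_reload_conf(); from a session), so they can be raised for the migration and lowered again afterwards without a restart.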

What does pg_stat_bgwriter say about checkpoints?
Do you have log_checkpoints enabled?
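For example, comparing timed against requested checkpoints shows whether WAL segments are filling up before checkpoint_timeout elapses (a minimal sketch against the pg_stat_bgwriter view):

    -- checkpoints_req counts checkpoints forced by filling checkpoint_segments;
    -- checkpoints_timed counts those triggered by checkpoint_timeout.
    SELECT checkpoints_timed,
           checkpoints_req,
           buffers_checkpoint
      FROM pg_stat_bgwriter;

If checkpoints_req dominates, checkpoint_segments is the limiting factor. Setting log_checkpoints = on in postgresql.conf then logs each checkpoint with its timing and buffer counts for further analysis.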

Thanks
VB

