On 7/17/13 6:46 PM, Josh Berkus wrote:
> The key metric I'm looking at is that the bgwriter only took care of
> 0.3% of buffers.
There look to be a good number of buffers on this server that are only
being written at checkpoint time. The background writer will only deal
with buffers when their usage count is low. Fast servers can cycle over
shared_buffers such that as soon as their usage counts get low, they're
immediately reallocated by a hungry backend. You might use
pg_buffercache to quantify how many buffers the BGW can possibly do
something with.
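A rough sketch of that, assuming the pg_buffercache extension is
installed in the database you're looking at; only the dirty buffers
with low usage counts are ones the BGW could write:

-- One-time setup in each database you want to inspect:
--   CREATE EXTENSION pg_buffercache;
SELECT
  usagecount,
  count(*) AS buffers,
  sum(CASE WHEN isdirty THEN 1 ELSE 0 END) AS dirty_buffers
FROM pg_buffercache
GROUP BY usagecount
ORDER BY usagecount;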
> So we're halting because we hit lru_maxpages a *lot*, which is keeping
> the bgwriter from keeping up. What gives?
2007's defaults can be a bummer in 2013. I don't hesitate to bump that
up to 500 on a server with decent hardware.
If it weren't for power savings concerns, I would set bgwriter_delay to
10ms by default. With that change lru_maxpages should drop to 10 to
have the same behavior as the existing default.
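For reference, here's a quick way to check what you're currently
running with; the defaults noted in the comments are the stock ones,
and the 10ms/500 figures above are judgment calls rather than
universal advice:

SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('bgwriter_delay', 'bgwriter_lru_maxpages',
               'bgwriter_lru_multiplier');
-- Stock defaults: bgwriter_delay=200ms, bgwriter_lru_maxpages=100,
-- bgwriter_lru_multiplier=2.0.  Changes go into postgresql.conf and
-- take effect on a reload (pg_ctl reload or SELECT pg_reload_conf()).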
"It then sleeps for bgwriter_delay milliseconds, and repeats. When there
are no dirty buffers in the buffer pool, though, it goes into a longer
sleep regardless of bgwriter_delay."
That wording is from some power savings code added after I poked at
things. Look at src/backend/postmaster/bgwriter.c where it mentions
"electricity" to see more about it. The longer sleeps are supposed to
be interrupted when backends do work, to respond to bursts. Maybe it
can get confused; I haven't looked at it that carefully.
The latching mechanism shouldn't need to be tunable if it works
correctly, which is why there are no exposed knobs for it.
Since I saw an idea I'm going to steal from your pg_stat_bgwriter query,
I'll trade you one with a completely different spin on what data to
harvest from it:
SELECT
  block_size::numeric * buffers_alloc / (1024 * 1024 * seconds) AS alloc_mbps,
  block_size::numeric * buffers_checkpoint / (1024 * 1024 * seconds) AS checkpoint_mbps,
  block_size::numeric * buffers_clean / (1024 * 1024 * seconds) AS clean_mbps,
  block_size::numeric * buffers_backend / (1024 * 1024 * seconds) AS backend_mbps,
  block_size::numeric * (buffers_checkpoint + buffers_clean + buffers_backend)
    / (1024 * 1024 * seconds) AS write_mbps
FROM
(
  SELECT
    now() AS sample,
    now() - stats_reset AS uptime,
    EXTRACT(EPOCH FROM now()) - EXTRACT(EPOCH FROM stats_reset) AS seconds,
    b.*,
    p.setting::integer AS block_size
  FROM pg_stat_bgwriter b, pg_settings p
  WHERE p.name = 'block_size'
) bgw;
That only works on 9.1 and later where there is a stats_reset time
available on pg_stat_bgwriter. Sample from a busy system with
moderately tuned BGW and checkpoint_timeout at 15 minutes:
-[ RECORD 1 ]---+-------------------
alloc_mbps | 246.019686474412
checkpoint_mbps | 0.0621780475463596
clean_mbps | 2.38631188442859
backend_mbps | 0.777490109599045
write_mbps | 3.22598004157399
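
One caveat to add: since those counters are averaged over the whole
period since stats_reset, on a long-running server you may want to
clear them and let things run through a few checkpoint cycles before
sampling:

-- Superuser only; wipes the cluster-wide bgwriter counters and sets a
-- fresh stats_reset timestamp for the query above to work from.
SELECT pg_stat_reset_shared('bgwriter');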
--
Greg Smith 2ndQuadrant US greg@xxxxxxxxxxxxxxx Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com