On Tue, Nov 20, 2012 at 12:16 PM, Jeff Janes <jeff.janes@xxxxxxxxx> wrote:
> On Tue, Nov 20, 2012 at 9:05 AM, Merlin Moncure <mmoncure@xxxxxxxxx> wrote:
>> On Tue, Nov 20, 2012 at 10:50 AM, Jeff Janes <jeff.janes@xxxxxxxxx> wrote:
>>>
>>> I wouldn't expect so.  Increasing shared_buffers should either fix
>>> free list lock contention, or leave it unchanged, not make it worse.
>>
>> AIUI, that is simply not true (unless you raise it to the point where
>> you're not churning them).  I'm looking at StrategyGetBuffer() for
>> non-scan cases.  It locks "BufFreelistLock", then loops over the free
>> list, and, if it finds nothing, engages a clock sweep.
>
> The freelist should never loop.  It is written as a loop, but I think
> there is currently no code path which ends up with valid buffers being
> on the freelist, so that loop will never, or at least rarely, execute
> more than once.
>
>> Both of those operations are dependent on the number of buffers being
>> managed, and so it's reasonable to expect some workloads to increase
>> contention with more buffers.
>
> The clock sweep can depend on the number of buffers being managed in a
> worst-case sense, but I've never seen any evidence (nor analysis) that
> this worst case can be obtained in reality on an ongoing basis.  By
> constructing two pathological workloads which get switched between, I
> can get the worst case to happen, but when it does happen the
> consequences are mild compared to the amount of time needed to set up
> the necessary transition.  In other words, the worst case can't be
> triggered often enough to make a meaningful difference.

Yeah, good points; but (getting off topic here): there have been
several documented cases of lowering shared_buffers improving
performance under contention...the 'worst case' might be happening
more often than expected.  In particular, what happens when a
substantial percentage of the buffer pool is set with a non-zero usage
count?  That seems unlikely, but possible?  Take note:

    if (buf->refcount == 0)
    {
        if (buf->usage_count > 0)
        {
            buf->usage_count--;
            trycounter = NBuffers;    /* emphasis */
        }

ISTM the time spent here isn't bounded, except that as more time is
spent sweeping (more backends are thus waiting and not marking pages),
the usage counts decrease faster until you hit a steady state.  A
smaller buffer pool would naturally help in that scenario, since your
usage counts would drop faster.

It strikes me as cavalier to be resetting trycounter while sitting
under the #1 known contention point for read-only workloads.
Shouldn't StrategyGetBuffer() work on an advisory basis and try to
force a buffer after N failed usage-count attempts?

merlin
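
PS: to make the "advisory" idea a bit more concrete, below is a toy,
standalone sketch (not the real freelist.c -- BufDesc, NBUFFERS and
sweep_with_budget() are invented for illustration, and all the buffer
header/freelist locking is left out) of a clock sweep that spends at
most a fixed budget of usage_count decrements and then forces out the
least-used unpinned buffer it has seen, instead of resetting its
counter and sweeping on:

    /* Toy model of a clock sweep with a capped "decrement budget". */
    #include <limits.h>
    #include <stdio.h>

    #define NBUFFERS 16

    typedef struct
    {
        int refcount;       /* pins: nonzero means the buffer can't be evicted */
        int usage_count;    /* popularity: the sweep decrements this */
    } BufDesc;

    static BufDesc buffers[NBUFFERS];
    static int next_victim = 0;

    /*
     * Spend at most 'budget' usage_count decrements; after that, force out
     * the least-used unpinned buffer seen so far.  Returns the victim's
     * index, or -1 if every buffer was pinned.
     */
    static int
    sweep_with_budget(int budget)
    {
        int best = -1;
        int best_usage = INT_MAX;
        int scanned = 0;

        while (scanned < 2 * NBUFFERS)  /* hard stop so we never spin forever */
        {
            int idx = next_victim;
            BufDesc *buf = &buffers[idx];

            next_victim = (next_victim + 1) % NBUFFERS;
            scanned++;

            if (buf->refcount != 0)
                continue;               /* pinned, not a candidate */

            if (buf->usage_count == 0)
                return idx;             /* classic clock-sweep victim */

            /* remember the best forced-eviction candidate seen so far */
            if (buf->usage_count < best_usage)
            {
                best_usage = buf->usage_count;
                best = idx;
            }

            buf->usage_count--;

            if (--budget <= 0)
                return best;            /* out of patience: force the best one */
        }
        return best;                    /* -1 only if everything was pinned */
    }

    int
    main(void)
    {
        /* simulate a pool where nearly every buffer has a high usage count */
        for (int i = 0; i < NBUFFERS; i++)
        {
            buffers[i].refcount = 0;
            buffers[i].usage_count = 5;
        }
        buffers[7].usage_count = 2;     /* one comparatively cold buffer */

        printf("victim: %d\n", sweep_with_budget(8));   /* forces buffer 7 */
        return 0;
    }

In the simulated "everything is hot" pool above, the budgeted sweep
gives up after eight decrements and evicts the coldest candidate it
saw, rather than holding the lock while it walks the whole pool down to
zero.  Whether that eviction choice is good enough in practice is
exactly the question, of course.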