Tom Lane wrote:
Scott Carey <scott@xxxxxxxxxxxxxxxxx> writes:
If there is enough lock contention and a common case is a short-lived shared lock, it makes perfect sense. Fewer readers are blocked waiting on writers at any given time. Readers can 'cut' in line ahead of writers within a certain scope (only up to the number already waiting at the time a shared lock reaches the head of the queue). Essentially this clumps shared and exclusive locks into larger streaks, and allows higher shared-lock throughput.
Exclusive locks may be delayed, but will NOT be starved, since on the next iteration, a streak of exclusive locks will occur first in the list and they will all process before any more shared locks can go.
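The wakeup policy described above can be sketched as a small simulation. This is a hypothetical illustration, not the actual LWLock code (which lives in C in src/backend/storage/lmgr/lwlock.c); queue entries 'S' and 'X' stand for shared and exclusive waiters:

```python
from collections import deque

def wake_batches(queue):
    """Simulate the proposed wakeup policy on a FIFO wait queue.

    If the waiter at the head is shared ('S'), wake every shared
    waiter currently in the queue -- readers jump ahead of writers,
    but only those already waiting; later arrivals queue behind.
    If the head is exclusive ('X'), the leading streak of exclusive
    waiters runs before any more shared lockers go, so writers are
    delayed but never starved.  Returns the list of wake batches.
    """
    q = deque(queue)
    batches = []
    while q:
        if q[0] == 'S':
            # wake all currently-waiting shared lockers at once
            batch = [w for w in q if w == 'S']
            q = deque(w for w in q if w != 'S')
        else:
            # process the leading streak of exclusive lockers
            batch = []
            while q and q[0] == 'X':
                batch.append(q.popleft())
        batches.append(batch)
    return batches
```

For example, a queue of ['S', 'X', 'S', 'X', 'S'] wakes all three readers first, then the two writers run as a streak; a writer at the head (['X', 'S', 'X', 'S']) goes before any readers, so exclusive requests are never starved indefinitely.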
That's a lot of sunny assertions without any shred of evidence behind
them...
The current LWLock behavior was arrived at over multiple iterations and
is not lightly to be toyed with IMHO. Especially not on the basis of
one benchmark that does not reflect mainstream environments.
Note that I'm not saying "no". I'm saying that I want a lot more
evidence *before* we go to the trouble of making this configurable
and asking users to test it.
regards, tom lane
Fair enough. Well, I am now appealing to all who have fairly decent-sized
hardware to try it out and see whether there are "gains", "no-changes",
or "regressions" based on your workload. It will also help if you report
the number of CPUs when you respond, to help collect feedback.
Regards,
Jignesh
--
Jignesh Shah http://blogs.sun.com/jkshah
The New Sun Microsystems,Inc http://sun.com/postgresql
--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)