Re: [repost] Help me develop new commit_delay advice

On 6 September 2012 04:20, Greg Smith <greg@xxxxxxxxxxxxxxx> wrote:
> On 08/02/2012 02:02 PM, Peter Geoghegan wrote:
> I dug up what I wrote when trying to provide better advice for this circa
> V8.3.  That never really gelled into something worth publishing at the time.
> But I see some similar patterns to what you're reporting, so maybe this
> will be useful input to you now.  That included a 7200RPM drive and a system
> with a BBWC.

So, did either Josh or Greg ever get as far as producing numbers for
drives with faster fsyncs than the ~8,000 us fsync speed of my
laptop's disk?

I'd really like to be able to make a firm recommendation as to how
commit_delay should be set, and have been having a hard time beating
the half raw-sync time recommendation, even with a relatively narrow
benchmark (that is, the alternative pgbench-tools scripts). My
observation is that it is generally better to mitigate the risk of
increased latency with a higher commit_siblings setting rather than
with a lower commit_delay (though it would be easy to overdo it -
commit_delay can now be thought of as a way to bring the benefits of
group commit to workloads that could benefit in principle, but would
otherwise see little of it, such as workloads with lots of small
writes but not too many clients).
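
For illustration, on my laptop, where the raw sync time is about 8,000
us, the settings I've been testing look roughly like this (illustrative
values only; the commit_siblings value shown is just the default):

    # postgresql.conf -- illustrative values only
    commit_delay = 4000        # half of the ~8,000 us raw sync time
    commit_siblings = 5        # the default; raise it to limit the latency risk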

One idea I had, which is really more -hackers material, was to have
MinimumActiveBackends() test whether backends with a transaction are
inCommit (that's a PGXACT field), rather than merely having a
transaction. commit_siblings would then represent the number of
backends that must be imminently committing before we delay, rather
than the number of backends merely in a transaction. It is far from
clear that that's a good idea, but that may just be because the
pgbench infrastructure is a poor proxy for real workloads with
variable-sized transactions. Pretty much all pgbench transactions
commit imminently anyway.
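
Very roughly, and purely as an untested sketch of the idea (this would
sit alongside the existing per-backend checks inside
MinimumActiveBackends()'s loop over the proc array; the exact field
name may not match what's in the tree today):

    if (pgxact->xid == InvalidTransactionId)
        continue;            /* existing check: no transaction at all */
    if (!pgxact->inCommit)
        continue;            /* new: in a transaction, but not yet committing */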

Another idea, which I have lost faith in - because it has been hard
to prove that client count is really relevant - was the notion that
commit_delay should be a dynamically adapting function of the number
of clients with open transactions. Setting commit_delay to 1/2 raw
sync time appears optimal at any client count greater than 1, and the
effect is quite noticeable even at 2 clients.

I have a rather busy schedule right now, and cannot spend many more
cycles on this, so I'd like to reach a consensus soon. At the very
least, we should give the 1/2 raw sync time recommendation the
official blessing of being included in the docs. It is okay if the
wording is a bit equivocal - that has to be better than the current
advice, which is (to paraphrase) "we don't really have a clue; you
tell us".

-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services


-- 
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

