On 2013-09-09 20:38:09 -0400, Andrew Dunstan wrote:
>
> On 08/01/2013 03:20 PM, Jeff Janes wrote:
> > On Thu, Aug 1, 2013 at 10:58 AM, Josh Berkus <josh@xxxxxxxxxxxx> wrote:
> >> Amit, All:
> >>
> >> So we just retested this on 9.3b2. The performance is the same as 9.1
> >> and 9.2; that is, progressively worse as the test cycles go on, and
> >> unacceptably slow compared to 8.4.
> >>
> >> Some issue introduced in 9.1 is causing BINDs to get progressively
> >> slower as the PARSE/BINDs get run repeatedly. As noted earlier in this
> >> thread, that can bloat to 200X the time required for a BIND, and it's
> >> definitely PostgreSQL-side.
> >>
> >> I'm trying to produce a test case which doesn't involve the user's
> >> application. However, hints on other things to analyze would be keen.
> >
> > Does it seem to be all CPU time? (It is hard to imagine what else it
> > would be, but...)
> >
> > Could you use oprofile or perf or gprof to get a profile of the
> > backend during a run? That should quickly narrow it down to which C
> > function has the problem.
> >
> > Did you test 9.0 as well?
>
> This has been tested back to 9.0. What we have found is that the problem
> disappears if the database has come in via dump/restore, but is present if
> it is the result of pg_upgrade. There are some long-running transactions
> also running alongside this - we are currently planning a test where those
> are not present. We're also looking at constructing a self-contained test
> case.
>
> Here is some perf output from the bad case:
>
> +  14.67%  postgres           [.] heap_hot_search_buffer
> +  11.45%  postgres           [.] LWLockAcquire
> +   8.39%  postgres           [.] LWLockRelease
> +   6.60%  postgres           [.] _bt_checkkeys
> +   6.39%  postgres           [.] PinBuffer
> +   5.96%  postgres           [.] hash_search_with_hash_value
> +   5.43%  postgres           [.] hash_any
> +   5.14%  postgres           [.] UnpinBuffer
> +   3.43%  postgres           [.] ReadBuffer_common
> +   2.34%  postgres           [.] index_fetch_heap
> +   2.04%  postgres           [.] heap_page_prune_opt
> +   2.00%  libc-2.15.so       [.] 0x8041b
> +   1.94%  postgres           [.] _bt_next
> +   1.83%  postgres           [.] btgettuple
> +   1.76%  postgres           [.] index_getnext_tid
> +   1.70%  postgres           [.] LockBuffer
> +   1.54%  postgres           [.] ReadBufferExtended
> +   1.25%  postgres           [.] FunctionCall2Coll
> +   1.14%  postgres           [.] HeapTupleSatisfiesNow
> +   1.09%  postgres           [.] ReleaseAndReadBuffer
> +   0.94%  postgres           [.] ResourceOwnerForgetBuffer
> +   0.81%  postgres           [.] _bt_saveitem
> +   0.80%  postgres           [.] _bt_readpage
> +   0.79%  [kernel.kallsyms]  [k] 0xffffffff81170861
> +   0.64%  postgres           [.] CheckForSerializableConflictOut
> +   0.60%  postgres           [.] ResourceOwnerEnlargeBuffers
> +   0.59%  postgres           [.] BufTableLookup

After a second look at this, I very tentatively guess that you'll see
get_actual_variable_range() as the entry point here, which would explain
why you're seeing this during PARSE. But there still is the question of why
we never actually seem to prune...

Greetings,

Andres Freund

--
 Andres Freund                     http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
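
For reference, a minimal sketch of the kind of self-contained reproduction
discussed above. This is an illustration under stated assumptions, not the
test case from this thread: the psycopg2 driver, the DSN, and the bind_test
table are all hypothetical, and it times repeated planning via EXPLAIN rather
than driving the extended-protocol PARSE/BIND path directly. The idea is the
same, though: hold back the xmin horizon with one session so that deleted
index entries cannot be pruned or marked killed, then watch the planner's
actual-min/max lookup walk them on every plan.

    import time
    import psycopg2

    DSN = "dbname=test"   # hypothetical scratch database

    # Session A: open a transaction with an assigned XID. While it stays
    # open, the rows deleted below are not yet dead to everyone, so their
    # index entries cannot be pruned or marked killed.
    holder = psycopg2.connect(DSN)
    hcur = holder.cursor()
    hcur.execute("SELECT txid_current()")

    # Session B: build a table with an indexed column, then delete its
    # low end so the bottom of the index is full of dead entries.
    conn = psycopg2.connect(DSN)
    conn.autocommit = True
    cur = conn.cursor()
    cur.execute("DROP TABLE IF EXISTS bind_test")
    cur.execute("CREATE TABLE bind_test (id int, payload text)")
    cur.execute("INSERT INTO bind_test "
                "SELECT g, repeat('x', 100) FROM generate_series(1, 1000000) g")
    cur.execute("CREATE INDEX ON bind_test (id)")
    cur.execute("ANALYZE bind_test")
    cur.execute("DELETE FROM bind_test WHERE id <= 500000")

    # Planning "id < 10" probes the low end of the histogram, so the
    # planner looks up the actual minimum of the index -- scanning the
    # dead entries left by the DELETE on every single plan.
    for i in range(10):
        start = time.perf_counter()
        cur.execute("EXPLAIN SELECT * FROM bind_test WHERE id < 10")
        cur.fetchall()
        print("plan %d: %.1f ms" % (i, (time.perf_counter() - start) * 1000.0))

    holder.rollback()
    holder.close()
    conn.close()

With the holder session in place, every iteration should stay slow, and a
perf profile of session B's backend should be dominated by the same kind of
index-scan/heap-fetch functions as the profile above. Without the holder
session, the first scan can mark the dead entries killed and later plans
speed up again, which is one reason the long-running transactions mentioned
above are a plausible ingredient in the reported behaviour.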