On 28.02.2011 11:38, Marc Cousin wrote:
I've been facing a very large (more than 15 seconds) planning time in a partitioned configuration. The number of partitions wasn't completely crazy: around 500, not in the thousands. The problem was that there were nearly 1000 columns in the parent table (a very special use case; there is a reason this application has that many columns). The check constraint was extremely simple (for each child, 1 column = 1 constant, always the same column). As I was surprised by this very large planning time, I tried to study the variation of planning time against several parameters:

- number of columns
- number of child tables
- constraint_exclusion's value (partition or off)

What (I think) I measured is that planning time seems to be O(n^2) in the number of columns, and O(n^2) in the number of child tables. Constraint exclusion had a limited impact on planning time (it added between 20% and 100% to planning time when there were many columns).
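A minimal sketch of how a test schema like the one described above could be generated, using the pre-declarative inheritance-based partitioning style (CHECK constraint per child). All identifiers (`wide_parent`, `part_key`, `colN`) are hypothetical, not taken from the original post:

```python
# Generate SQL for a wide parent table plus inheritance children, each with a
# simple "1 column = 1 constant" CHECK constraint, as in the setup above.
# Identifiers are made up for illustration.

def make_partition_ddl(n_columns: int, n_children: int) -> str:
    cols = ",\n  ".join(f"col{i} integer" for i in range(n_columns))
    ddl = [f"CREATE TABLE wide_parent (\n  part_key integer,\n  {cols}\n);"]
    for child in range(n_children):
        ddl.append(
            f"CREATE TABLE wide_parent_{child} "
            f"(CHECK (part_key = {child})) INHERITS (wide_parent);"
        )
    return "\n".join(ddl)

# For the reported scale: ~1000 columns, ~500 children.
ddl = make_partition_ddl(1000, 500)
print(ddl.count("INHERITS"))  # → 500
```

Feeding the output to psql with `\timing` on, then running `EXPLAIN SELECT ... WHERE part_key = <const>`, is one way to observe the planning time in isolation.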
Testing here with a table of 1000 columns and 100 partitions, about 80% of the planning time is spent looking up the statistics on attribute width, to calculate the average tuple width. I don't see O(n^2) behavior, though; it seems linear.
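Heikki's profiling result is consistent with a simple cost model: if the planner fetches width statistics once per (column, child table) pair, the total work grows linearly in each dimension taken separately. A purely illustrative model (not PostgreSQL code):

```python
# Illustrative cost model for the stats-lookup phase: one per-column width
# lookup for every child table scanned. Total work is n_columns * n_children,
# i.e. linear in each parameter when the other is held fixed.

def stat_lookups(n_columns: int, n_children: int) -> int:
    lookups = 0
    for _child in range(n_children):
        for _col in range(n_columns):
            lookups += 1  # one statistics fetch per column per child
    return lookups

print(stat_lookups(1000, 100))  # → 100000
# Doubling either dimension doubles the work -- linear, not quadratic:
assert stat_lookups(2000, 100) == 2 * stat_lookups(1000, 100)
assert stat_lookups(1000, 200) == 2 * stat_lookups(1000, 100)
```

Under this model, quadratic behavior would only appear if one varied columns and children together, which may explain the difference between the two measurements.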
I'd like to know: is this a known behavior? And can I mitigate it somehow?
I'm out of ideas on how to make it faster, I'm afraid.

--
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance