I wrote:
> Marc Cousin <cousinmarc@xxxxxxxxx> writes:
>> Yes, for the same test case, with a bit of data in every partition and
>> statistics up to date, planning time goes from 20 seconds to 125ms for the
>> 600 children/1000 columns case.  Which is of course more than acceptable.

> [ scratches head ... ]  Actually, I was expecting the runtime to go up
> not down.  Maybe there's something else strange going on here.

Oh, doh: the failing pg_statistic lookups are all coming from the part of
estimate_rel_size() where it tries to induce a reasonable tuple width
estimate for an empty table (see get_rel_data_width).  Probably not a case
we need to get really tense about.

Of course, you could also argue that this code is stupid because it's very
unlikely that there will be any pg_statistic entries either.  Maybe we
should just have it go directly to the datatype-based estimate instead of
making a boatload of useless pg_statistic probes.

			regards, tom lane
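
To make the suggestion concrete, here is a rough sketch of the per-column
loop being discussed.  This is paraphrased from memory rather than quoted
from plancat.c; the helper names (get_attavgwidth for the pg_statistic
probe, get_typavgwidth for the datatype-based fallback) and the surrounding
details should be treated as illustrative only, not as the exact source.

/*
 * Rough sketch (not the literal plancat.c source) of the per-column width
 * estimation inside get_rel_data_width(), as reached from
 * estimate_rel_size() for a relation with no pages.  The get_attavgwidth()
 * call is the pg_statistic probe discussed above; for an empty partition it
 * nearly always fails, so the proposal is to skip it and go straight to the
 * datatype-based get_typavgwidth() estimate.
 */
#include "postgres.h"
#include "catalog/pg_attribute.h"
#include "utils/lsyscache.h"
#include "utils/rel.h"

static int32
rel_data_width_sketch(Relation rel)
{
	int32		tuple_width = 0;
	int			attno;

	for (attno = 1; attno <= RelationGetNumberOfAttributes(rel); attno++)
	{
		Form_pg_attribute att = TupleDescAttr(rel->rd_att, attno - 1);
		int32		item_width;

		if (att->attisdropped)
			continue;

		/* pg_statistic probe: useless (and failing) for an empty table */
		item_width = get_attavgwidth(RelationGetRelid(rel), attno);
		if (item_width <= 0)
		{
			/* fall back on a datatype-based width estimate */
			item_width = get_typavgwidth(att->atttypid, att->atttypmod);
		}
		tuple_width += item_width;
	}

	return tuple_width;
}

With 600 children of 1000 columns each, the empty-partition case implies on
the order of 600,000 failed pg_statistic lookups during planning, which is
what short-circuiting to the datatype-based estimate would avoid.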