On Mon, May 08, 2017 at 10:12:18AM -0700, Paul A Jungwirth wrote:
> I'm working on a problem where partitioning seems to be the right
> approach, but we would need a lot of partitions (say 10k or 100k).
> Everywhere I read that after ~100 child tables you experience
> problems. I have a few questions about that:

We use partitioning, previously with one child per month (keeping 1-6 years
of history); I tried switching to one child per day, and that caused issues.
For us, planning time is fairly unimportant (~1 sec would be acceptable 99%
of the time), but I recall seeing even more than that. I changed to daily
granularity for only our largest tables, which has been working fine for the
last ~9 months. So the issue isn't just "number of children" but "total
number of tables". I believe the problem may have been due to large
pg_statistic/pg_attribute and similar catalog tables growing to more than a
few hundred MB and potentially no longer fitting in the buffer cache.

> 3. Is it true that query planning time should scale linearly as I add
> more child tables?

I believe it's understood to be super-linear:
https://www.postgresql.org/message-id/26761.1483989025%40sss.pgh.pa.us
https://www.postgresql.org/message-id/4188.1298960419%40sss.pgh.pa.us

Justin
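
[Editor's note: as a rough way to check the catalog-bloat point above, here is
a minimal sketch of queries one could run; the list of catalogs to inspect is
an assumption, not something from the original message.]

    -- Size of catalogs that grow with the number of child tables.
    SELECT relname,
           pg_size_pretty(pg_total_relation_size(oid)) AS total_size
    FROM pg_class
    WHERE relname IN ('pg_statistic', 'pg_attribute', 'pg_class', 'pg_attrdef')
    ORDER BY pg_total_relation_size(oid) DESC;

    -- Compare the totals above against the configured buffer cache size.
    SHOW shared_buffers;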