Yelai, Ramkumar wrote:
>> Anything exceeding a few hundred partitions is not considered a good idea.
>> The system needs to keep track of all the tables, and query planning for
>> such a partitioned table might be expensive.
>>
>> 1440 is probably pushing the limits, but maybe somebody with more
>> experience can say more.

> By mistake I wrote 1440 tables, but that is incorrect; the correct total is:
>
> 7 base tables x 120 months = 840 child tables.
>
> According to your statement, creating this many tables will affect
> performance. But according to the documentation
> (http://www.postgresql.org/docs/9.1/static/ddl-partitioning.html),
> constraint_exclusion will improve query performance. Could you clarify why
> query planning would be expensive?

The planner will have to decide which of the 840 tables to access.

> I have one more strategy: instead of creating 10 years of tables up front,
> I would like to create them in batches, e.g. create 2 years of tables
> (240 tables), and when we go beyond 2 years, create the next 2 years'
> tables and update the trigger -- or use 5 years (480 tables) instead of
> 2 years.
>
> This approach avoids creating too many partitions at once, and if the user
> needs space they can truncate the old tables. Please let me know whether
> this is a good approach.

I don't understand that in detail.

I would recommend that you prototype some of these variants and run some
performance tests.  That's the only good way to know what will perform well
in your environment.

Yours,
Laurenz Albe

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
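
P.S. For readers following along: a minimal sketch of the inheritance-based
partitioning pattern that the linked 9.1 documentation describes, with a
hypothetical `measurement` table partitioned by month (table and column names
are illustrative, not from the original thread). It also shows why planning
cost grows with partition count: constraint_exclusion prunes children at plan
time, but only after the planner has examined each child's CHECK constraint.

```sql
-- Parent table; in the inheritance scheme it normally holds no rows itself.
CREATE TABLE measurement (
    logdate date NOT NULL,
    value   integer
);

-- One child per month. The non-overlapping CHECK constraints are what
-- constraint_exclusion uses to skip irrelevant partitions at plan time.
CREATE TABLE measurement_2013_01 (
    CHECK (logdate >= DATE '2013-01-01' AND logdate < DATE '2013-02-01')
) INHERITS (measurement);

CREATE TABLE measurement_2013_02 (
    CHECK (logdate >= DATE '2013-02-01' AND logdate < DATE '2013-03-01')
) INHERITS (measurement);

-- A trigger routes inserts on the parent to the right child. Extending the
-- partition set in batches (as proposed above) means creating the next batch
-- of child tables and replacing this function to cover them.
CREATE OR REPLACE FUNCTION measurement_insert_trigger()
RETURNS trigger AS $$
BEGIN
    IF NEW.logdate >= DATE '2013-01-01'
       AND NEW.logdate < DATE '2013-02-01' THEN
        INSERT INTO measurement_2013_01 VALUES (NEW.*);
    ELSIF NEW.logdate >= DATE '2013-02-01'
       AND NEW.logdate < DATE '2013-03-01' THEN
        INSERT INTO measurement_2013_02 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'Date out of range; add a partition';
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER insert_measurement_trigger
    BEFORE INSERT ON measurement
    FOR EACH ROW EXECUTE PROCEDURE measurement_insert_trigger();

-- With constraint_exclusion = partition (the 9.1 default), a query such as
--   SELECT * FROM measurement WHERE logdate >= DATE '2013-02-01';
-- will scan only measurement_2013_02 (plus the empty parent). The planner
-- still has to test every child's CHECK constraint against the WHERE clause,
-- so with 840 children that per-query planning work is the cost in question.
```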