Hi,
We currently have a large table (9 million rows), of which only the last couple of days' worth of data is queried on a regular basis.
To improve performance we are thinking of partitioning the table.
One idea is (rough sketch below):
current_data = the last day's worth of data
archive_data = everything before today (going back to 2005)
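For that first idea, I was imagining something along these lines, using inheritance and CHECK constraints (untested sketch; the table and column names are made up, and the boundary date is hard-coded just for illustration - a nightly job would have to move rows and adjust the constraints):

-- parent table; the children hold the actual rows
CREATE TABLE sales (
    firm_id    integer,
    sale_date  date,
    amount     numeric
);

-- most recent day's rows
CREATE TABLE current_data (
    CHECK (sale_date >= DATE '2007-06-12')
) INHERITS (sales);

-- everything older, back to 2005
CREATE TABLE archive_data (
    CHECK (sale_date < DATE '2007-06-12')
) INHERITS (sales);

Queries against the parent with a date filter should then be able to skip the archive child when constraint_exclusion is on.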
The idea being proposed at work is (again, sketch below):
current_data = today's data
prior year's data = broken down into one table per day
archive_data = data older than a year
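For the per-day variant, I assume each day would get its own child, something like this (again an untested sketch, reusing the made-up sales table above):

-- one child per day, e.g. for 2007-06-11
CREATE TABLE sales_2007_06_11 (
    CHECK (sale_date = DATE '2007-06-11')
) INHERITS (sales);

-- route inserts for that day to the right child
CREATE RULE sales_insert_2007_06_11 AS
    ON INSERT TO sales
    WHERE (sale_date = DATE '2007-06-11')
    DO INSTEAD INSERT INTO sales_2007_06_11 VALUES (NEW.*);

So a year's worth of data would mean roughly 365 such children, plus a rule (or trigger) per child.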
My questions are:
a) Does Postgres suffer a performance hit if there are, say, 200 child tables?
b) What about aggregation across dates within the last year, e.g. total sales for firm A over the last year (example below)? Such a query would need to look at n tables.
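As an example, I am thinking of queries like this (untested; firm_id 42 is just a placeholder), which under the per-day scheme would still have to touch every daily child from the past year, since constraint exclusion can only skip children whose CHECK constraints contradict the WHERE clause:

SET constraint_exclusion = on;

SELECT sum(amount)
FROM   sales
WHERE  firm_id   = 42
  AND  sale_date >= DATE '2006-06-12';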
Any ideas, tips, or gotchas in implementing partitioning would be welcome. It is a somewhat mission-critical system (not trading, so not quite that critical).
How expensive is maintaining so many partitions, both in terms of writing and maintaining the scripts and in terms of performance?
Thanks in advance.
Radhika