Tom Lane wrote:
> You are making a common beginner error, which is to suppose that N
> little tables are better than one big one. They are not.
Well, that depends on how you define "better". For my purposes, it is better.
> What you're effectively doing is replacing the upper levels of a big
> table's indexes with lookups in the system catalogs, which in point of
> fact is a terrible tradeoff from a performance standpoint.
Only if you assume I use all data in all tables all the time. But as I
have explained in other replies recently, most of the time only data
from the newest child table is used.
I did performance tests before deciding on the design, and having it
all in one large table would not perform within requirements at all. The
reason was that the indexes on the ever-growing table would take longer
and longer to update at each insert.
When I use partitions, or child tables, I can use COPY to insert the
data into the new child table and then add the indexes to that single
table only. That was, by far, the fastest solution.
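
As an illustration, here is a minimal sketch of that workflow using
inheritance-based child tables; the table and column names are
hypothetical, since the actual schema was not posted:

  -- Illustrative parent table; children inherit its columns.
  CREATE TABLE readings (
      reading_time  timestamptz NOT NULL,
      sensor_id     integer     NOT NULL,
      value         double precision
  );

  -- New child table for one month; the CHECK constraint lets the
  -- planner skip this partition for queries on other date ranges
  -- when constraint_exclusion is enabled.
  CREATE TABLE readings_2007_06 (
      CHECK (reading_time >= '2007-06-01'
         AND reading_time <  '2007-07-01')
  ) INHERITS (readings);

  -- Bulk-load the new child table while it has no indexes yet...
  COPY readings_2007_06 FROM '/path/to/2007_06.csv' WITH CSV;

  -- ...then index only this child table, so insert cost does not
  -- grow with the total data volume.
  CREATE INDEX readings_2007_06_time_idx
      ON readings_2007_06 (reading_time);

Since each child table gets its own indexes, the index build after the
COPY touches only the freshly loaded data, never the whole history.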
> From a database-theory standpoint, if all this data is alike then you
> should have it all in one big table.
Then what is the point of partitions, if one cannot use them to
separate logically similar data into different partitions when there is
a need to do so? Of course I could have put it all in a single table,
had it not been for the performance. I could have used a discrete
timestamp to separate the data, but why? Partitions are more practical.
> There are certain practical cases where it's worth partitioning, but
> not at the level of granularity that you are proposing.
If it's practical to use partitions, granularity does not come into the
equation.
regards
thomas