"Thomas Finneid" <tfinneid@xxxxxxxxxxxxxxxxxxxxx> writes: >> What you're >> effectively doing is replacing the upper levels of a big table's indexes >> with lookups in the system catalogs, which in point of fact is a >> terrible tradeoff from a performance standpoint. > > Only if you assume I use all data in all tables all the time. But as I have > explained in other replies recently, most of the times only data from the > newest child table is used. Tom's point is that if you have 55k tables then just *finding* the newest child table is fairly expensive. You're accessing a not insignificant-sized index and table of tables. And the situation is worse when you consider the number of columns all those tables have, all the indexes those tables have, all the column keys those indexes the tables have have, etc. Nonetheless you've more or less convinced me that you're not completely nuts. I would suggest not bothering with inheritance though. Inheritance imposes additional costs to track the inheritance relationships. For your purposes you may as well just create separate tables and not bother trying to use inheritance. > If its practical to use partitions, granularity does not come into the > equation. Uhm, yes it does. This is engineering, it's all about trade-offs. Having 55k tables will have costs and benefits. I think it's a bit early to dismiss the costs. Keep in mind that profiling them may be a bit tricky since they occur during planning and DDL that you haven't finished experimenting with yet. The problem you just ran into is just an example of the kind of costs it imposes. You should also consider some form of compromise with separate tables but at a lower level of granularity. Perhaps one partition per day instead of one per 30s. you could drop a partition when all the keys in it are marked as dead. 
-- 
Gregory Stark
EnterpriseDB          http://www.enterprisedb.com