> We are currently running a Postgres server (upgraded to 8.1) which has
> one large database with approx. 15,000 tables. Unfortunately, performance
> suffers from that, because the internal catalog tables (especially the one
> that holds the attribute info, pg_attribute) get too large.
>
> (We NEED that many tables; please don't recommend reducing them.)
>
> Logically these tables could be grouped into 500 databases. My question
> is:
>
> Would performance be better if I had 500 databases (on one Postgres
> server instance), each containing 30 tables, or is it better to have
> one large database with 15,000 tables? In the old days of Postgres 6.5
> we tried that, but performance was horrible with many databases ...
>
> BTW: I searched the mailing list but found nothing on the subject, and
> there also isn't any information in the documentation about the effects
> of the number of databases, tables, or attributes on performance.
>
> Now, what do you say? Thanks in advance for any comments!

I've never run near that many databases on one box, so I can't comment on
the performance. But let's assume for the moment that pg runs fine with
500 databases. The most important advantage of the multi-schema approach
over separate databases is cross-schema querying: a single connection can
join tables that live in different schemas, which you cannot do across
databases. Given how you are defining your problem, I think schemas are
the better way to do things.

Merlin
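As a rough sketch of what the schema approach looks like (the schema and table names below are made up for illustration): you keep one database, create one schema per logical group, and can then join across groups in a single query.

```sql
-- One database, one schema per logical group (instead of 500 databases).
CREATE SCHEMA group_a;
CREATE SCHEMA group_b;

CREATE TABLE group_a.items (id integer, total numeric);
CREATE TABLE group_b.items (id integer, total numeric);

-- A cross-schema join over a single connection; separate databases
-- cannot be queried together like this.
SELECT a.id, a.total, b.total
FROM group_a.items a
JOIN group_b.items b ON a.id = b.id;
```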