> > 2) we can go with single instance of PostgreSQL service which would
> > then contain 30.000 database tables.
>
> So, consider this purely theoretical discussion - does anyone here have
> experience of running PostgreSQL service with large number of database
> tables (couple of thousands up to couple of tens of thousands)?

Hello,

We handle data for several customers, each of them having its own schema.
This allows us to easily bulk-move data to other servers when space runs
low. The data are comparable to data-warehouse data and are used by a
reporting application. A majority of the tables are seldom or never used,
which may moderate possible issues with a very large pg_catalog.

Overall we are impressed by how well Postgres handles this :-)

On the other hand, we have spent a *LOT* of work on table and query
design, and we do have some problems with parser performance: we log all
durations greater than 1 second; about 1 % of these are "parse"
statements, and about 90 % of those long parse statements stay below
10 seconds.

Here are the figures for our server hosting the most tables. I expect
them to be somewhat above the maximum that we should allow:

DB size:         3500 GB
pg_catalog size: 2 GB

select count(*) from pg_tables;    -- 120'884
select count(*) from pg_indexes;   -- 219'082
select count(*) from pg_attribute; -- 2'779'199

Server: 48 GB RAM & 12 cores

regards,

Marc Mamin

--
Sent via pgsql-admin mailing list (pgsql-admin@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin
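
P.S. For anyone wanting to collect the same figures on their own cluster,
a sketch of the queries involved (assuming a reasonably recent PostgreSQL;
`pg_size_pretty`, `pg_database_size` and `pg_total_relation_size` are
standard catalog functions, and `ALTER SYSTEM` needs 9.4+ — on older
versions set `log_min_duration_statement` in postgresql.conf instead):

```sql
-- Overall database size and the size of the system catalogs
SELECT pg_size_pretty(pg_database_size(current_database())) AS db_size;

SELECT pg_size_pretty(sum(pg_total_relation_size(c.oid))) AS pg_catalog_size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'pg_catalog';

-- Object counts, as quoted above
SELECT count(*) FROM pg_tables;
SELECT count(*) FROM pg_indexes;
SELECT count(*) FROM pg_attribute;

-- Log every statement phase (including extended-protocol "parse"
-- messages) that takes longer than 1 second; requires superuser
ALTER SYSTEM SET log_min_duration_statement = '1s';
SELECT pg_reload_conf();
```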