Hello all,

We are exploring possible strategies for deploying PostgreSQL with an application that will store a fairly large amount of data (the current implementation stores around 345 GB, and that is expected to grow by up to 10 times).

Without going into too much detail, the key fact is that the data sets needed for manipulation at any given point in time (let's call them operational data sets) will each be around 1/10,000th of the whole data set. There can be up to 100 different data sets being operated on concurrently. Operational data sets are strictly defined in terms of what they contain, but they could contain any piece of the complete data set, so we have two obvious strategies to choose from:

1) Run multiple instances of the PostgreSQL service, say (purely theoretically) 10 of them on the same HA cluster setup. Each instance would hold roughly 1/10th of the big record set and around 3,000 database tables (which apparently shouldn't be a problem for PostgreSQL, see below).

2) Run a single instance of the PostgreSQL service, which would then contain 30,000 database tables.

So, treating this as a purely theoretical discussion: does anyone here have experience running PostgreSQL with a large number of database tables (a couple of thousand up to a couple of tens of thousands)? This wiki entry indicates that PostgreSQL in general has no problem with "thousands of tables" -- how about tens of thousands?

http://postgres.cz/wiki/PostgreSQL_SQL_Tricks#Faster_table.27s_list

Best regards,
V.

--
Pozdrav/Greetings,
Vedran Krivokuća
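
P.S. For what it's worth, a rough sketch of the kind of catalog checks we would run on a test instance to get a feel for the overhead of tens of thousands of tables (nothing application-specific, just the standard system catalogs):

    -- Count ordinary user tables (a rough proxy for how large
    -- the system catalogs will grow).
    SELECT count(*) AS user_tables
      FROM pg_class c
      JOIN pg_namespace n ON n.oid = c.relnamespace
     WHERE c.relkind = 'r'
       AND n.nspname NOT IN ('pg_catalog', 'information_schema');

    -- Total on-disk size of the system catalogs themselves; this
    -- grows with the number of tables, indexes and columns.
    SELECT pg_size_pretty(sum(pg_total_relation_size(c.oid))::bigint) AS catalog_size
      FROM pg_class c
      JOIN pg_namespace n ON n.oid = c.relnamespace
     WHERE n.nspname = 'pg_catalog';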