On Wed, Apr 10, 2013 at 10:43 AM, Dale Betts <dale.betts@xxxxxxxxxx> wrote:
>
> I'd agree, certainly in my experience.
>
> You need to ensure OS parameters such as the max open files (fs.file-max if we're talking
> Linux) are set appropriately. Bearing in mind each user will have an open file on each
> underlying datafile for the databases they're connected to.

OK, to reply to myself and other interested parties -- further insights are welcome!

I've been digging today for information on this subject and found:

http://postgresql.1045698.n5.nabble.com/GENERAL-Maximum-number-of-tables-per-database-and-slowness-td1853836.html

Basically it says: any number of tables is acceptable (certainly within the couple of tens of thousands we might end up with), but keep in mind that pg_catalog can grow seriously with that many tables, and a huge pg_catalog tends to slow down the query planner. That thread is from 2005, but I doubt those facts have changed substantially with time.

I'll probably do some testing, though I can hardly replicate the production volume of data. Given the predictable nature of the queries we run against this system, it seems we'll go with multiple databases, each containing a couple of thousand tables at most. Still weighing the pros and cons of each approach.

--
Pozdrav/Greetings,
Vedran Krivokuća

Disclaimer: This message may contain information.
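P.S. For anyone wanting to keep an eye on this themselves, something along the lines of the queries below is what I have in mind for watching table counts and catalog growth. They only touch the standard system catalogs (pg_class, pg_namespace) and the stock size functions, but I haven't run them against anything near production size yet, so treat them as a rough sketch:

  -- how many user tables exist (system schemas excluded)
  SELECT count(*) AS user_tables
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
  WHERE c.relkind = 'r'
    AND n.nspname NOT IN ('pg_catalog', 'information_schema');

  -- ten largest pg_catalog relations, to watch for catalog bloat
  SELECT c.relname,
         pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
  WHERE n.nspname = 'pg_catalog'
  ORDER BY pg_total_relation_size(c.oid) DESC
  LIMIT 10;

Running these per database (we'd have several) should show whether pg_catalog really does balloon at the table counts we're planning for.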