Samuel Gendler wrote:

> On Thu, Nov 8, 2012 at 1:36 AM, Denis <socsam@> wrote:
>>
>> P.S.
>> Not to start a holy war, but FYI: in a similar project where we used
>> MySQL we now have about 6000 DBs and everything works like a charm.
>
> You seem to have answered your own question here. If my recollection of
> a previous discussion about many schemas and pg_dump performance is
> accurate, I suspect you are going to be told that you've got a data
> architecture that is fairly incompatible with postgresql's architecture
> and you've specifically ruled out a solution that would play to
> postgresql's strengths.

Ok guys, it was not my intention to hurt anyone's feelings by mentioning
MySQL, sorry about that. There simply was an earlier project with a
similar architecture built on MySQL. When we started the current project,
I decided to give PostgreSQL a try. Now I see that the same architecture
does not carry over to PostgreSQL.

I would also recommend refreshing the info at
http://wiki.postgresql.org/wiki/FAQ. There is a question "What is the
maximum size for a row, a table, and a database?". Please add information
there on the maximum number of databases, and the maximum number of
tables per database, at which PostgreSQL continues to work properly.

PS: the easiest solution in my case is to initially create 500 DBs (named
like app_template_[0-500]) and create up to 500 schemas in each of them.
That allows for 250000 clients in total, which should be enough. The
question is: can you see any pitfalls in this approach?
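For what it's worth, here is a minimal sketch of the client routing I have
in mind (the database/schema names and the route() helper are hypothetical,
just for illustration): each client id maps deterministically to one of the
500 databases and one of the 500 schemas inside it.

# Minimal sketch, assuming 500 databases x 500 schemas per database.
# Names like "app_template_N" and "client_N" are illustrative only.
DBS = 500
SCHEMAS_PER_DB = 500

def route(client_id):
    """Map a client id (0..249999) to a (database, schema) pair."""
    if not 0 <= client_id < DBS * SCHEMAS_PER_DB:
        raise ValueError("client id out of range")
    # Integer division picks the database, the remainder picks the schema.
    db_index, schema_index = divmod(client_id, SCHEMAS_PER_DB)
    return ("app_template_%d" % db_index, "client_%d" % schema_index)

print(route(123456))   # -> ('app_template_246', 'client_456')

The application would then connect to the returned database and set
search_path to the returned schema before running the client's queries.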