Tom Lane-2 wrote
> Denis <socsam@> writes:
>> Tom Lane-2 wrote
>>> Hmmm ... so the problem here isn't that you've got 2600 schemas, it's
>>> that you've got 183924 tables. That's going to take some time no matter
>>> what.
>
>> I wonder why pg_dump has to deal with all 183924 tables when I asked it
>> to dump only one schema ("pg_dump -n schema_name"), or even just one
>> table ("pg_dump -t 'schema_name.comments'")?
>
> It has to know about all the tables even if it's not going to dump them
> all, for purposes such as dependency analysis.
>
>> We have a web application where we create a schema with a number of
>> tables in it for each customer. This architecture was chosen to ease
>> the process of backing up and restoring data.
>
> I find that argument fairly dubious, but in any case you should not
> imagine that hundreds of thousands of tables are going to be cost-free.
>
>         regards, tom lane

I still can't understand why pg_dump has to know about all the tables.
For example, I have this simple table:

CREATE TABLE "CLog" (
    "fromUser" integer,
    "toUser" integer,
    message character varying(2048) NOT NULL,
    "dateSend" timestamp without time zone NOT NULL
);

It has no foreign keys, it isn't partitioned, and it has no relationship
to any other table. Why does pg_dump have to gather information about ALL
the tables in the database just to dump this one table?
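
One can even check from the system catalogs that nothing references this
table. A minimal sketch, assuming "CLog" is on the current search_path
(filtering to "normal" dependencies so internal bookkeeping rows don't
show up):

SELECT classid::regclass AS dependent_catalog,
       objid,
       deptype
FROM pg_depend
WHERE refclassid = 'pg_class'::regclass
  AND refobjid   = '"CLog"'::regclass
  AND deptype    = 'n';   -- 'n' = normal dependency (views, constraints, etc.)

An empty result is consistent with the table standing entirely alone,
which is what makes the full-catalog scan feel unnecessary for this case.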
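
For completeness, here is a hypothetical illustration (schema and table
names invented) of the kind of cross-schema dependency that Tom's
"dependency analysis" presumably has to be prepared for:

CREATE SCHEMA customer_a;
CREATE SCHEMA customer_b;
CREATE TABLE customer_b.users (id integer PRIMARY KEY);
-- A view in one customer's schema that reaches into another's:
CREATE VIEW customer_a.active_users AS
    SELECT id FROM customer_b.users;

A "pg_dump -n customer_a" has to notice that customer_a.active_users
depends on a table outside the schema being dumped, and whether any given
table participates in such a dependency is only knowable after reading the
catalog entries for every table.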