Thanks again for the hard work, guys.

When I said that the schemas were empty, I was talking about data, not tables. You are right that each schema has ~20 tables (plus indices, sequences, etc.), but almost no data (one or two rows at most). The data itself doesn't seem to matter much in this case (I may be wrong, though), so the sample database should be enough to find the weak spots that need attention.

> but in the mean time it can be circumvented
> by using -Fc rather than -Fp for the dump format.
> Doing that removed 17 minutes from the run time.

We do use -Fc on our production server, but it doesn't help much (the dump still takes more than 24 hours). I have actually tried several different dump options, without success.

It seems that you guys are very close to great improvements here. Thanks for everything!

Best,
Hugo
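
P.S. In case it helps to see the exact commands, the two dump formats we compared look roughly like this ("ourdb" is just a placeholder for the real database name):

    # plain-text SQL dump (-Fp is pg_dump's default output format)
    pg_dump -Fp -f ourdb.sql ourdb

    # custom-format dump (compressed archive, restorable with pg_restore)
    pg_dump -Fc -f ourdb.dump ourdb

On the production database both still take more than 24 hours, so the choice of output format doesn't seem to be the bottleneck for us.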