Re: pg_dump with 1100 schemas being a bit slow

On Wed, 2009-10-07 at 12:51 +0200, Loic d'Anterroches wrote:
> Hello,

> My problem is that the dump increased steadily with the number of
> schemas (now about 20s from about 12s with 850 schemas) and pg_dump is
> now ballooning at 120MB of memory usage when running the dump.
> 

And it will continue to. The number of locks pg_dump must acquire grows
with the number of schemas and objects, so backup time will keep
increasing as you add them. This holds whether you run a separate dump
per schema or a single global dump with -Fc.

I agree with the other participants in this thread that it makes more
sense for you to use -Fc, but your overall speed isn't going to change
all that much.
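For reference, a minimal sketch of the two approaches being compared: a
per-schema loop versus one custom-format dump. The database name is a
placeholder, and the schema query is only an illustration of how one
might enumerate user schemas:

```shell
#!/bin/sh
# DB is a hypothetical database name; substitute your own.
DB=mydb

# Approach 1: one pg_dump invocation per schema.
# Every run pays the full startup cost again (connection, catalog
# reads, and a lock per table in the database).
for schema in $(psql -Atc "SELECT nspname FROM pg_namespace
                           WHERE nspname NOT LIKE 'pg\_%'
                             AND nspname <> 'information_schema'" "$DB"); do
    pg_dump -n "$schema" -Fc -f "backup_${schema}.dump" "$DB"
done

# Approach 2: a single custom-format (-Fc) dump.
# The startup cost is paid once; individual schemas can still be
# restored selectively later:
#   pg_restore -n some_schema -d "$DB" backup_all.dump
pg_dump -Fc -f backup_all.dump "$DB"
```

Either way, the per-dump lock acquisition described above is the part
that scales with the number of objects, which is why the single -Fc
dump mainly saves the repeated startup overhead rather than changing
the overall picture.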

Joshua D. Drake

-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564
Consulting, Training, Support, Custom Development, Engineering
If the world pushes look it in the eye and GRR. Then push back harder. - Salamander


-- 
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
