
Re: pg_dump with 1100 schemas being a bit slow

Harald,

>>settings up each time. The added benefit of doing a per schema dump is
>>that I provide it to the users directly, that way they have a full
>>export of their data.
>
> you should try the timing with
>
> pg_dump --format=c --file=completedatabase.dmp
>
> and then generating the separate schemas in an extra step like
>
> pg_restore --schema=%s --file=outputfilename.sql completedatabase.dmp
>
> I found that even with maximum compression
>
> pg_dump --format=c --compress=9
>
> the pg_dump compression was quicker than dump + gzip/bzip/7z compression
> afterwards.
>
> And after the dump file is created, pg_restore will leave your database
> alone (make sure to put completedatabase.dmp on a separate filesystem).
> You can even try to run more than one pg_restore --file in parallel.
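
So if I follow, the recipe is roughly this (a minimal sketch of the
two-step workflow; the database name yourdb and the catalog query used to
list the schemas are my assumptions, not from the thread):

    # 1. One full dump in custom format; compression happens inside pg_dump.
    pg_dump --format=c --compress=9 --file=completedatabase.dmp yourdb

    # 2. Extract one plain-SQL file per schema from the archive; this only
    #    reads the dump file, so the live database is left alone.
    for s in $(psql -At -d yourdb -c \
        "SELECT nspname FROM pg_namespace
         WHERE nspname !~ '^pg_' AND nspname <> 'information_schema'")
    do
        pg_restore --schema="$s" --file="$s.sql" completedatabase.dmp
    done

The loop body could also be backgrounded in small batches to run a few
pg_restore processes side by side, since they only read the archive.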

Yummy! The speed of a full dump and the benefits of the per-schema dumps
for the users. I will try this tonight when the load is low.
I will keep you informed of the results.

Thanks a lot for all the good ideas and pointers!
loïc

--
Loïc d'Anterroches - Céondo Ltd - http://www.ceondo.com
