
Re: Any work on better parallelization of pg_dump?

On Mon, 29 Aug 2016 13:38:03 +0200
hubert depesz lubaczewski <depesz@xxxxxxxxxx> wrote:

> Hi,
> we have rather uncommon case - DB with ~ 50GB of data, but this is
> spread across ~ 80000 tables.
> 
> Running pg_dump -Fd -jxx dumps in parallel, but only data, and MOST of
> the time is spent on queries that run sequentially, and as far as I can
> tell, get schema of tables, and sequence values.
> 
> This happens on Pg 9.5. Are there any plans to make getting schema
> faster for such cases? Either by parallelization, or at least by getting
> schema for all tables "at once", and having pg_dump "sort it out",
> instead of getting schema for each table separately?
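
For what it's worth, the split between the serial and parallel phases is easy
to see by timing the schema and data portions separately. A rough sketch
(database name and paths are illustrative):

    # schema dump runs in a single session in 9.5
    time pg_dump --schema-only -f schema.sql mydb

    # data dump is spread across parallel workers with -j
    time pg_dump -Fd -j 8 --data-only -f dumpdir mydb

On a database with ~80000 tables, the first command should dominate, which
matches what you describe.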

Another issue I found in the current implementation is how pg_restore deals
with primary keys. Because adding a PK takes an exclusive lock on the table,
it is executed alone, before the other index creations.

Splitting the PK into a unique index creation followed by the constraint
creation might save a lot of time, as other indexes could be built while the
PK's underlying index is being created.
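
Concretely, the two-step form I have in mind looks like this (table and index
names are illustrative, not what pg_restore emits today):

    -- step 1: build the unique index; several of these can run
    -- concurrently in separate sessions
    CREATE UNIQUE INDEX big_table_pkey ON big_table (id);

    -- step 2: promote the existing index to a primary key; the
    -- exclusive lock is now held only briefly, since no index
    -- build happens here
    ALTER TABLE big_table
        ADD CONSTRAINT big_table_pkey PRIMARY KEY USING INDEX big_table_pkey;

PRIMARY KEY USING INDEX has been available since 9.1, so this would work on
the 9.5 cluster discussed above.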

Regards,


-- 
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general