
Re: pg_dump with lots and lots of tables

Andy Colson <andy@xxxxxxxxxxxxxxx> writes:
> pg_dump is upset that my max_locks_per_transaction is too low.  I've bumped it up several times (up to 600 so far) but I'm not sure how many it needs.

> I'm merging 90 databases into a single database with 90 schemas.  Each schema can have roughly 500 tables.  Do I need to set max_locks_per_transaction to 90*500 = 45,000?  Will that even work?

pg_dump will need about 45,000 locks altogether, so anything north of
45000/max_connections should work (more if you have other sessions going
on at the same time).
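As a sanity check, the arithmetic above can be sketched as follows (a minimal sketch assuming the figures from this thread -- 90 schemas of ~500 tables -- and PostgreSQL's default max_connections of 100; substitute your actual settings):

```python
import math

# Assumed figures: 90 schemas x ~500 tables each, and the
# default max_connections of 100. Adjust both for your setup.
total_locks = 90 * 500
max_connections = 100

# The shared lock table holds max_locks_per_transaction * max_connections
# entries, so the per-connection setting must cover the total demand
# divided across connections, rounded up.
needed = math.ceil(total_locks / max_connections)
print(needed)  # 450 under these assumptions
```

Round up further if other sessions will hold locks at the same time as the dump.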

Basically the lock table is sized at max_locks_per_transaction*max_connections,
and transactions can use as many entries as they want --- there's no
attempt to hold a session to its "fair share" of the table.  The parameter
is only defined as it is to ensure that if you bump up max_connections the
lock table will get bigger automatically, so you won't starve sessions of
locks accidentally.
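Put concretely, a hypothetical postgresql.conf fragment for this scenario might look like the following (illustrative values only, with headroom over the bare minimum for concurrent sessions):

```ini
# postgresql.conf -- illustrative values, not a recommendation
max_connections = 100
max_locks_per_transaction = 512   # shared lock table: 512 * 100 = 51,200 slots
```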

> Will I ever need to bump up sysctl kernel.shmmax?

If the postmaster fails to start with the larger setting, then yes.
But lock entries aren't that large so probably it won't matter.
If it does matter, and increasing shmmax is inconvenient, you could
back off shared_buffers to make room.

			regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



