
Re: shared memory/max_locks_per_transaction error


On Fri, Mar 14, 2008 at 7:12 PM, Tom Lane <tgl@xxxxxxxxxxxxx> wrote:
"Kynn Jones" <kynnjo@xxxxxxxxx> writes:
> Initially I didn't know what our max_locks_per_transaction was (nor even a
> typical value for it), but in light of the procedure's failure after 3500
> iterations, I figured that it was 3500 or so.  In fact ours is only 64 (the
> default), so I'm now thoroughly confused.

The number of lock slots available system-wide is
max_locks_per_transaction times max_connections, and your procedure was
chewing them all.  I suggest taking the hint's advice if you really need
to create 3500 tables in a single transaction.  Actually, you'd better
do it if you want to have 3500 tables at all, because pg_dump will
certainly try to acquire AccessShare lock on all of them.

OK, in light of this, I'll have to either change my strategy (and schema) significantly or greatly increase max_locks_per_transaction.
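
For what it's worth, here's how I'm reading the arithmetic on our server (assuming I have the formula right, and that we're still on the default max_connections):

    -- the two settings that size the shared lock table
    SHOW max_locks_per_transaction;   -- 64 here (the default)
    SHOW max_connections;             -- 100, I believe (the default)
    -- total shared lock slots ~= 64 * 100 = 6400; I'm guessing each
    -- CREATE TABLE can take more than one lock (toast table, indexes),
    -- which would explain why ~3500 tables in one transaction blew it.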

I'm leaning towards the re-design option, primarily because I don't really understand the consequences of cranking up max_locks_per_transaction.  E.g., why is its default value 2^6 instead of, say, 2^15?  In fact, why is there a ceiling on the number of locks at all?  The fact that the default is relatively small (i.e. a couple of orders of magnitude below the number of tables I have in mind) suggests to me that setting it to a huge number would be a terrible idea.  Is that so?
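
If we did decide to just crank it up, my (possibly wrong) understanding is that it's a postgresql.conf change that only takes effect after a server restart, since the lock table lives in shared memory sized at startup -- something like:

    # postgresql.conf -- my guess at a value that would cover a few thousand
    # tables per transaction; needs a restart, and each extra slot costs a
    # bit more shared memory, which I assume is why the default is so small
    max_locks_per_transaction = 4096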

Thanks!

Kynn




