Re: shared memory size during upgrade pgsql with partitions (max_locks_per_transaction)


 



> On Tue, Dec 17, 2019 at 08:03:41PM +0000, Piotr Włodarczyk wrote:
>> Currently we're working on PSQL 11.5 and we're trying to upgrade to 12.1.
>> 
>> During that we hit a problem:
>> 
>> command: "/usr/pgsql-12/bin/pg_dump" --host /cluster/postgresql --port 50432 --username postgres --schema-only --quote-all-identifiers --binary-upgrade --format=custom --file="pg_upgrade_dump_281535902.custom" 'dbname=sprint' >> "pg_upgrade_dump_281535902.log" 2>&1
>> pg_dump: error: query failed: ERROR:  out of shared memory
>> HINT:  You might need to increase max_locks_per_transaction.
>> pg_dump: error: query was: LOCK TABLE "some_schemaa"."table_part_80000000_2018q3" IN ACCESS SHARE MODE
>> 
>> On the current instance we have about one thousand partitions, partitioned at two levels: first by id_product, and second by quarter of the year, as you can see in the log above.
>> 
>> How should we calculate shared memory (and eventually max_locks_per_transaction) so that we fit within the limits during the upgrade?
> 
> Great question.  Clearly, if you can run that (or similar) pg_dump command,
> then you can pg_upgrade.  I think you could also do pg_upgrade --check,

pg_upgrade --check doesn't report any error or warning.

> 
> The query looks like
> 		FROM pg_class c...
> 		WHERE c.relkind in ('%c', '%c', '%c', '%c', '%c', '%c', '%c') "
> 
> ..and then does:
> 
>                if (tblinfo[i].dobj.dump &&
>                        (tblinfo[i].relkind == RELKIND_RELATION ||
>                         tblinfo[i].relkind == RELKIND_PARTITIONED_TABLE) &&
>                        (tblinfo[i].dobj.dump & DUMP_COMPONENTS_REQUIRING_LOCK))
>                {
>                        resetPQExpBuffer(query);
>                        appendPQExpBuffer(query,
>                                          "LOCK TABLE %s IN ACCESS SHARE MODE",
>                                          fmtQualifiedDumpable(&tblinfo[i]));
>                        ExecuteSqlStatement(fout, query->data);
>                }
> 
> ..then filters by -N/-n/-t/-T (which doesn't apply to pg_upgrade):
>                        selectDumpableTable(&tblinfo[i], fout);
> 
> So it looks like COUNT(1) FROM pg_class WHERE relkind IN ('r','p') should do it.
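A rough version of that count, assuming only ordinary and partitioned tables need to be locked here, might be:

    -- count the relations pg_dump will take an ACCESS SHARE lock on
    SELECT count(*) AS lockable_relations
    FROM pg_class
    WHERE relkind IN ('r', 'p');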
> 
> But actually, during pg_upgrade, since nothing else is running, you actually
> have max_connections*max_locks_per_transaction total locks.
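If I read the docs right, the shared lock table is sized for roughly max_locks_per_transaction * (max_connections + max_prepared_transactions) object locks, so a quick capacity check on the running cluster could look like:

    -- approximate number of object locks the shared lock table can hold
    SELECT current_setting('max_locks_per_transaction')::int
           * (current_setting('max_connections')::int
              + current_setting('max_prepared_transactions')::int) AS approx_lock_slots;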
> 
> Said differently, I think you could set max_locks_per_transaction to:
> SELECT (SELECT COUNT(1) FROM pg_class WHERE relkind IN ('r','p'))/current_setting('max_connections')::int;
> 
> ..probably with a fudge factor of +10 for any system process (and due to
> integer truncation).
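Putting those pieces together, a sketch of the suggested value (with the +10 fudge factor) might be:

    -- suggested max_locks_per_transaction for the upgrade, per the formula above
    SELECT (SELECT count(*) FROM pg_class WHERE relkind IN ('r', 'p'))
           / current_setting('max_connections')::int
           + 10 AS suggested_max_locks_per_transaction;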
> 
> Someone might say that pg_upgrade or pg_dump could check for that specifically..

Yes, and temporarily increase it, or HINT at how to calculate a proper value.

> 
> Justin
> 

We realized that the problem is with the pg_dump run that happens during pg_upgrade.

Now we're past the upgrade and can't check your calculation. We simply increased max_connections until the migration passed :)

I'll try to check it on an empty, fake database.
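If the count works out higher than the default of 64 per connection, I suppose the value could be raised on both the old and the new cluster before re-running pg_upgrade; a minimal sketch (256 is only a placeholder, not a recommendation):

    -- raise the limit before the upgrade; takes effect after a restart
    ALTER SYSTEM SET max_locks_per_transaction = 256;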


