Re: Error when execute insert/update

On 2022-Sep-11, Leandro Bugalho wrote:

> Hi guys, I have a PostgreSQL 9.5 server, which started showing the
> following message in the log:
> 
> ,ERROR,54000,"multixact ""members"" limit exceeded","This command
> would create a multixact with 2 members, but the remaining space is
> only enough for 0 members.","Run a database-wide VACUUM in database
> with OID 139059 with reduced vacuum_multixact_freeze_min_age and
> vacuum_multixact_freeze_table_age settings.",,,"SQL statement ""SELECT
> 1 FROM ONLY ""public"".""alf_server"" x WHERE ""id""
> OPERATOR(pg_catalog.=) $1 FOR KEY SHARE OF x""","insert into
> 
> I started the vacuum in the database, but given its size I believe it
> will take a long time. Is there any way to increase this member limit
> while the vacuum is running, to free up the application?

Sorry I'm late to the party.  I hope you're already out of this hole.
So this reply is for the archives' sake.

There's no configuration parameter you can change that will help you in
this case.

One way out of this that's perhaps a tad quicker than vacuuming the
entire database, is vacuuming just the table(s) that have the oldest
multixacts.  If you're lucky, it's just a not-so-big table that can be
dealt with quicker than the entire database.
You're looking for the table(s) with the highest values of
mxid_age(pg_class.relminmxid).  Have VACUUM process them successfully
and their relminmxid will advance to a current value, and that will
enable VACUUM to move the global minmxid limit forward and delete the
oldest multixact data, giving some extra room for new multixacts to be
created.
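For the archives' sake, a query along these lines will find the candidates (these are standard pg_class columns, and mxid_age() exists as of 9.5; the LIMIT is arbitrary):

```sql
-- Find the tables (including TOAST tables and matviews) whose
-- relminmxid is oldest; these are the ones holding the limit back.
SELECT c.oid::regclass AS table_name,
       mxid_age(c.relminmxid) AS multixact_age
FROM pg_class c
WHERE c.relkind IN ('r', 't', 'm')
ORDER BY mxid_age(c.relminmxid) DESC
LIMIT 10;

-- Then process each one; VACUUM FREEZE is equivalent to vacuuming
-- with the freeze ages set to zero.  The table name is a placeholder.
VACUUM FREEZE public.some_old_table;
```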

Now, it is surprising that autovacuum didn't already do this by itself.
It definitely should have.  However, there are some problem cases:

Sometimes, this is caused by tables that have corrupted data; those
cause vacuum to terminate with an error without advancing the limit, so
your database as a whole is stuck in old-multixactid land.  If you can
fix the corruption and let vacuum complete, that will work.  If you
cannot, you could just DROP the table.  If it has valuable data, maybe
you can CREATE TABLE AS SELECT FROM corrupt_table, with enough smarts to
avoid falling over due to the corruption.
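One way to get "enough smarts" is to copy the rows out in small key ranges and skip any range that errors out.  This is only a sketch: it assumes the table has a bigint "id" column (as alf_server does) whose min/max can still be read, and the batch size is arbitrary:

```sql
-- Hypothetical salvage sketch: copy rows in id ranges, skipping
-- ranges that hit corruption, so one bad page doesn't abort the copy.
CREATE TABLE corrupt_table_salvaged
  (LIKE corrupt_table INCLUDING ALL);

DO $$
DECLARE
  lo bigint;
  hi bigint;
BEGIN
  SELECT min(id), max(id) INTO lo, hi FROM corrupt_table;
  WHILE lo <= hi LOOP
    BEGIN
      INSERT INTO corrupt_table_salvaged
        SELECT * FROM corrupt_table
        WHERE id >= lo AND id < lo + 1000;
    EXCEPTION WHEN OTHERS THEN
      RAISE NOTICE 'skipping rows with id in [%, %)', lo, lo + 1000;
    END;
    lo := lo + 1000;
  END LOOP;
END $$;
```

You lose the rows in the skipped ranges, of course, and smaller batches recover more rows at the cost of a slower copy.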

Another cause we've seen is leftover temp tables: autovacuum cannot
process them because temp tables cannot be read/written by sessions
other than the one that created them.  What you can do in this case is
just drop those temp tables.  We added some additional protections for
this case in Postgres 10, but 9.5 still had some holes.  This is
particularly problematic if the server crashes while some session is
using temp tables, and that session's backend ID is never reused
afterwards.  Normally such temp tables would be dropped at startup by a
new session that gets the same ID, but if --say-- the ID is very high
and you never have that many concurrent sessions again, they never are.
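Something like this will list such leftovers (9.5 catalogs; a table is temporary when relpersistence = 't', and each session's temp schema is named pg_temp_N).  Verify that the owning backend is really gone before dropping anything:

```sql
-- List temp tables in any session's temp schema, oldest multixact
-- first; orphans from dead sessions are candidates for DROP TABLE.
SELECT n.nspname AS temp_schema,
       c.relname,
       mxid_age(c.relminmxid) AS multixact_age
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relpersistence = 't'
ORDER BY mxid_age(c.relminmxid) DESC;

-- Then, as superuser (schema and table names are placeholders):
-- DROP TABLE pg_temp_123.leftover_table;
```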

(If you do DROP any tables, you will need to vacuum some other table
afterwards, to let the code that advances the minimum have a chance to
run.)
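Any table will do for that follow-up vacuum, since finishing a VACUUM is what triggers the recomputation of the database-wide minimum; for instance:

```sql
-- Vacuuming any table gives vacuum a chance to recompute and advance
-- the database's minimum multixact ID after the DROPs.
VACUUM pg_catalog.pg_class;
```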

-- 
Álvaro Herrera               48°01'N 7°57'E  —  https://www.EnterpriseDB.com/
"How we put our fingers into each other's clay. That is friendship: playing
the potter and seeing what shapes can be drawn out of the other" (C. Halloway
in Something Wicked This Way Comes, R. Bradbury)
