On 5/14/20 8:35 AM, Eduard Rozenberg wrote:
Hello there,
I'm a long-time postgres user but vacuumdb newbie :).
Using vacuumdb (v12.2) with '--jobs' to vacuum a v9.6 database on
localhost with parallel processes:

$ time vacuumdb --full --echo -U postgres --jobs=8 -d mydatabase_test -p 5434 --password
As shown below, I ran into the (100%?) predictable/expected deadlock on
system catalog tables that is documented on the vacuumdb doc page
(https://www.postgresql.org/docs/12/app-vacuumdb.html).
Question: I specified "-d mydatabase_test" - why is it also doing a full
vac on the system catalog (postgres db), which is practically guaranteed
to cause a deadlock? I don't need or want it to do a full vac on the
postgres db, as I can do that myself later if I need to, and it takes no
time compared to the full vac of my own 7 TB database that I am
"extremely eager" to parallelize.

The postgres database and the system catalogs are two separate things.
Every database carries its own copy of the system catalogs in its
pg_catalog schema, so what is being vacuumed here is part of
mydatabase_test itself, not the postgres database.
See:
https://www.postgresql.org/docs/12/sql-vacuum.html
"FULL
Selects “full” vacuum, which can reclaim more space, but takes much
longer and exclusively locks the table. This method also requires extra
disk space, since it writes a new copy of the table and doesn't release
the old copy until the operation is complete. Usually this should only
be used when a significant amount of space needs to be reclaimed from
within the table.
"
Pretty sure all that rewriting of tables bloats the system catalogs and
so they get a VACUUM FULL.
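You can actually watch that happen via relfilenode (a quick sketch on a
throwaway table; the table name is made up for illustration):

  CREATE TABLE scratch (id int);
  SELECT relfilenode FROM pg_class WHERE relname = 'scratch';
  VACUUM (FULL) scratch;
  -- The value below differs from the first SELECT: the rewrite created a
  -- new file on disk, which updated the table's pg_class row and left a
  -- dead row behind; multiply by thousands of tables and the catalogs
  -- themselves end up needing a vacuum.
  SELECT relfilenode FROM pg_class WHERE relname = 'scratch';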
The question here is: why is it necessary to do a VACUUM FULL in the
first place?
Is my only option to list every schema.table and feed them to vacuumdb
as "-t schema1.table1 -t schema1.table2 ........"? That's manageable but
unfortunate, as it creates more work (having to list all the tables) and
adds additional work for automation scripts as well.
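If it comes to that, I'd probably generate the list rather than maintain
it by hand. An untested sketch (assuming none of my table names need
special shell quoting):

  $ psql -At -p 5434 -U postgres -d mydatabase_test -c "
      SELECT format('--table=%I.%I', schemaname, tablename)
      FROM pg_tables
      WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
    " | xargs vacuumdb --full --echo --jobs=8 -p 5434 -U postgres -d mydatabase_test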
I've tried searching forums/Google for past issues related to this. I
did find an old discussion about possible enhancements to "vacuumdb
--jobs" to help the user avoid these predictable deadlocks, where the
conclusion was to let the user figure out a way to deal with it, such as
via "-t table -t table -t table .......".
Maybe "--jobs" should only apply to the user's own databases, and when
vacuumdb is working on system catalog tables it should apply a strictly
serial/sequential approach to handle those tables?
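In the meantime that two-pass behavior can be approximated by hand: a
parallel run restricted to user tables (as sketched above), followed by
a serial run over just the catalog tables. Again an untested sketch:

  $ psql -At -p 5434 -U postgres -d mydatabase_test -c "
      SELECT format('--table=pg_catalog.%I', relname)
      FROM pg_class c JOIN pg_namespace n ON n.oid = c.relnamespace
      WHERE n.nspname = 'pg_catalog' AND c.relkind = 'r'
    " | xargs vacuumdb --full -p 5434 -U postgres -d mydatabase_test

Leaving off --jobs makes vacuumdb use a single connection, so the
catalog tables are processed one at a time and can't deadlock against
each other.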
Thanks!
------------------------------------------
output from vacuumdb's --echo
------------------------------------------
...
VACUUM (FULL) mycompanytesting.sometable1;
VACUUM (FULL) index.mydatabasestd_missing;
VACUUM (FULL) mycompanytesting.sometable2;
VACUUM (FULL) mycompanytesting.sometable3;
VACUUM (FULL) pg_catalog.pg_attribute;
VACUUM (FULL) mycompanydata.sometable4;
VACUUM (FULL) pg_catalog.pg_statistic;
VACUUM (FULL) mycompanytesting.sometable5;
VACUUM (FULL) pg_catalog.pg_proc;
VACUUM (FULL) pg_catalog.pg_depend;
VACUUM (FULL) pg_catalog.pg_class;
VACUUM (FULL) pg_catalog.pg_type;
vacuumdb: error: vacuuming of database "mydatabase_test" failed: ERROR: deadlock detected
DETAIL: Process 28183 waits for AccessShareLock on relation 1259 of database 35239378; blocked by process 28182.
Process 28182 waits for AccessShareLock on relation 1247 of database 35239378; blocked by process 28183.
HINT: See server log for query details.
------------------------------
/var/log/postgresql-9.6
------------------------------
...
LOG: received SIGHUP, reloading configuration files
ERROR: canceling autovacuum task
CONTEXT: automatic vacuum of table "mydatabase_test.someschema.sometable"
ERROR: canceling autovacuum task
CONTEXT: automatic vacuum of table "mydatabase_test.someschema.sometable"
ERROR: canceling autovacuum task
CONTEXT: automatic vacuum of table "mydatabase_test.pg_toast.pg_toast_35240266"
LOG: could not receive data from client: Connection timed out
LOG: received SIGHUP, reloading configuration files
LOG: received SIGHUP, reloading configuration files
ERROR: canceling autovacuum task
CONTEXT: automatic vacuum of table "mydatabase_test.someschema.sometable"
LOG: received SIGHUP, reloading configuration files
LOG: received SIGHUP, reloading configuration files
ERROR: canceling autovacuum task
CONTEXT: automatic vacuum of table "mydatabase_test.pg_toast.pg_toast_35240330"
ERROR: canceling autovacuum task
CONTEXT: automatic vacuum of table "mydatabase_test.someschema.sometable"
ERROR: deadlock detected
DETAIL: Process 28183 waits for AccessShareLock on relation 1259 of database 35239378; blocked by process 28182.
Process 28182 waits for AccessShareLock on relation 1247 of database 35239378; blocked by process 28183.
Process 28183: VACUUM (FULL) pg_catalog.pg_type;
Process 28182: VACUUM (FULL) pg_catalog.pg_class;
HINT: See server log for query details.
STATEMENT: VACUUM (FULL) pg_catalog.pg_type;
ERROR: canceling statement due to user request
STATEMENT: VACUUM (FULL) pg_catalog.pg_class;
ERROR: canceling statement due to user request
STATEMENT: VACUUM (FULL) pg_catalog.pg_depend;
ERROR: canceling statement due to user request
STATEMENT: VACUUM (FULL) someschema.sometable
LOG: could not send data to client: Broken pipe
FATAL: connection to client lost
LOG: could not send data to client: Broken pipe
FATAL: connection to client lost
ERROR: canceling statement due to user request
STATEMENT: VACUUM (FULL) mycompanydata.wildcard_replacement_bkp;
LOG: could not send data to client: Broken pipe
STATEMENT: VACUUM (FULL) mycompanydata.wildcard_replacement_bkp;
FATAL: connection to client lost
ERROR: canceling statement due to user request
STATEMENT: VACUUM (FULL) pg_catalog.pg_proc;
LOG: could not send data to client: Broken pipe
STATEMENT: VACUUM (FULL) pg_catalog.pg_proc;
FATAL: connection to client lost
ERROR: canceling statement due to user request
STATEMENT: VACUUM (FULL) mycompanydata.cpc_tag_score2;
LOG: could not send data to client: Broken pipe
STATEMENT: VACUUM (FULL) mycompanydata.cpc_tag_score2;
LOG: could not send data to client: Broken pipe
FATAL: connection to client lost
FATAL: connection to client lost
LOG: received SIGHUP, reloading configuration files
LOG: received SIGHUP, reloading configuration files
--
Adrian Klaver
adrian.klaver@xxxxxxxxxxx