RE: autovacuum failing on pg_largeobject and disk usage of the pg_largeobject growing unchecked

Michael Lewis <mlewis@xxxxxxxxxxx> wrote on 06/16/2020 04:41:16 PM:
> Still, if you run a manual vacuum analyze verbose, what sort of output
> do you get?

On one of the instances that is exhibiting the "disk leak" behavior, the
VACUUM ANALYZE VERBOSE command doesn't generate any output or complete
before I lose the connection to the database (presumably because I hit a
connection read timeout). Is it possible to configure the read timeout for
psql?
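One workaround I may try, on the assumption that something in between is
dropping what looks like an idle connection, is enabling the libpq TCP
keepalive options on the connection string (the host and database names
below are placeholders):

$ psql "host=db.example.com dbname=mydb keepalives=1 \
        keepalives_idle=30 keepalives_interval=10 keepalives_count=5"

As far as I can tell, psql itself doesn't have a read-timeout setting, so
this is a guess rather than a confirmed fix.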

On some of our healthy instances, we were able to run VACUUM ANALYZE
VERBOSE:

=> vacuum verbose analyze pg_largeobject;
INFO:  vacuuming "pg_catalog.pg_largeobject"
INFO:  scanned index "pg_largeobject_loid_pn_index" to remove 630 row versions
DETAIL:  CPU: user: 0.06 s, system: 0.10 s, elapsed: 0.17 s
INFO:  "pg_largeobject": removed 630 row versions in 190 pages
DETAIL:  CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
INFO:  index "pg_largeobject_loid_pn_index" now contains 533 row versions in 18346 pages
DETAIL:  630 index row versions were removed.
18340 index pages have been deleted, 18339 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO:  "pg_largeobject": found 577 removable, 533 nonremovable row 
versions in 399 out of 399 pages
DETAIL:  0 dead row versions cannot be removed yet, oldest xmin: 69015245
There were 550 unused item pointers.
Skipped 0 pages due to buffer pins, 0 frozen pages.
0 pages are entirely empty.
CPU: user: 0.06 s, system: 0.11 s, elapsed: 0.17 s.
INFO:  "pg_largeobject": truncated 399 to 305 pages
DETAIL:  CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
INFO:  analyzing "pg_catalog.pg_largeobject"
INFO:  "pg_largeobject": scanned 305 of 305 pages, containing 533 live 
rows and 0 dead rows; 533 rows in sample, 533 estimated total rows
VACUUM


> Are there indexes on this table that perhaps are very large and 
> needing to be rescanned many times because maintenance_work_mem
> isn't high enough to handle in a single pass?

On the same healthy instance where we were able to run the VACUUM ANALYZE 
VERBOSE, the index size does seem reasonably small:

=> SELECT pg_size_pretty (pg_indexes_size('pg_largeobject'));
-[ RECORD 1 ]--+-------
pg_size_pretty | 143 MB
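
(That figure is consistent with the VACUUM VERBOSE output above: 18346
index pages x 8 kB per page is about 143 MB, assuming the default 8 kB
block size.)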

But on the unhealthy instance, it is much larger:

=> SELECT pg_size_pretty (pg_indexes_size('pg_largeobject'));
 pg_size_pretty
----------------
 7241 MB
(1 row)

But it isn't clear to me whether the index size is a symptom or the
actual root cause.
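
To help tell the two apart, I'm planning to compare autovacuum activity on
a healthy and an unhealthy instance, along these lines (a sketch using the
standard statistics view):

=> SELECT relname, n_dead_tup, last_autovacuum, autovacuum_count
     FROM pg_stat_all_tables
    WHERE relname = 'pg_largeobject';

If the unhealthy instance shows no recent last_autovacuum and a growing
n_dead_tup, that would suggest the failing vacuums are the cause and the
index bloat just a symptom.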

> You might try "create index concurrently, drop index concurrently, 
> & rename index" (reindex concurrently if you were on 
> PG 12) as a sort of online 'vacuum full' on the index(es).

Unfortunately, since pg_largeobject is a system table, the user we use to
connect to the database doesn't have permission to do this. We get a
"must be owner of relation pg_largeobject" error when we try to create
the replacement index (using CREATE INDEX CONCURRENTLY).
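
For reference, the attempt looked roughly like this (the replacement index
name is made up; the column list matches the existing
pg_largeobject_loid_pn_index):

=> CREATE INDEX CONCURRENTLY pg_largeobject_loid_pn_index_new
       ON pg_largeobject (loid, pageno);
ERROR:  must be owner of relation pg_largeobject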

> By the way, the best practices for these mailing lists suggest
> partial quoting and responding in-line or below, not "top posting"
> with the entire conversation below.

My mistake! Sorry about that.


Regards,

Jim Hurne
