Hi there,
I originally posted these questions to the pgsql-performance mailing list, but due to lack of response, I think that these may be more general in nature--so I'm re-posting them here. Apologies for the cross-posting ahead of time.
We are having real trouble reclaiming dead large-object space via VACUUM FULL in Postgres 8.4.2 on Linux. We generated a large amount of blob garbage through application code and by not running vacuumlo often enough. We have since run vacuumlo and "VACUUM ANALYZE pg_largeobject", but when we run "VACUUM FULL pg_largeobject" to reclaim the many gigabytes of dead disk space now in there, the vacuum runs for days without finishing: it becomes bound to a single CPU, does barely any I/O, and of course holds an exclusive lock on the pg_largeobject table the whole time.
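For reference, the cleanup sequence we ran was roughly the following (the database name "mydb" is a placeholder):

```shell
# Remove orphaned large objects; their rows become dead tuples in pg_largeobject
vacuumlo -v mydb

# Mark the dead tuples reusable and refresh planner statistics
psql -d mydb -c 'VACUUM ANALYZE pg_largeobject;'

# Try to actually return the disk space to the OS -- this is the step
# that runs for days, CPU bound, with an exclusive lock on the table
psql -d mydb -c 'VACUUM FULL pg_largeobject;'
```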
Therefore, we have the following questions:
1) Is there any easy way to tune the vacuum process so that it is not CPU bound while doing so little I/O? Why would VACUUM FULL be CPU bound at all?
2) Is it possible to interrupt VACUUM FULL and restart it later so that it picks up where it left off? That way we could do the cleanup in pieces.
3) Are there any alternatives that would work, such as CLUSTER? That doesn't seem to be allowed, since pg_largeobject is a system table -- trying CLUSTER on the pg_largeobject table yields: ERROR: "pg_largeobject" is a system catalog
If anybody could help out with these questions, it would be greatly appreciated.
Sam