On 5/14/20 11:27 AM, Eduard Rozenberg wrote:
> @Adrian thanks.
> I previously did a test VACUUM FULL the "normal" non-parallel way (a
> plain VACUUM FULL in SQL) on a copy of the database, and saw the DB
> size (Postgres 9.6) shrink from 6.8 TB to 4.4 TB of actual, real disk
> space usage ($ du -d 1).
> I don't know why so much space was "locked up" (other than that there
> is a lot of data "churn" from daily updates). But I definitely do need
> to do the VACUUM FULL on the production DB to get down to the smaller
> size - I cannot afford the 2.4 TB of "wasted" space on an ongoing
> basis.
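For what it's worth, before deciding that space has to be reclaimed it
can help to see which tables account for it. A rough sketch from the
shell ('mydb' is a placeholder for your database name):

    $ psql -d mydb -c "
        SELECT relname,
               pg_size_pretty(pg_total_relation_size(oid)) AS total_size
        FROM pg_class
        WHERE relkind = 'r'
        ORDER BY pg_total_relation_size(oid) DESC
        LIMIT 20;"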
It may not be wasted space. A regular VACUUM marks space within a table
as available for reuse (it is not returned to the OS) when it removes
unneeded tuples, and then, roughly speaking, fills that space up with
new tuples. So if the vacuuming is working properly you will reach a
steady state where space is reused and the on-disk database size grows
only slowly as the reusable space is occupied. I would monitor the
database size on a regular basis. My guess is that the VACUUM FULL
releases the space back to the OS and it then fills up again as the
database does those daily updates.
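A sketch of what that monitoring could look like, run by hand or from
cron ('mydb' again a placeholder) - the overall size for the trend, and
pg_stat_user_tables for the dead-tuple counts that regular vacuuming
should be keeping in check:

    $ psql -d mydb -c \
        "SELECT pg_size_pretty(pg_database_size(current_database()));"
    $ psql -d mydb -c "
        SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
        FROM pg_stat_user_tables
        ORDER BY n_dead_tup DESC
        LIMIT 10;"

If n_dead_tup keeps climbing on the heavily updated tables, that points
at autovacuum not keeping up rather than at space that needs a VACUUM
FULL.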
> Based on your email it sounds like the vacuuming of those pg_catalog.*
> tables is done regardless, as a normal part of doing a VACUUM FULL on
> my own database.
> Unfortunately I still don't see an ideal way to run VACUUM FULL in
> parallel via vacuumdb without running into the expected and documented
> deadlock. The only method I'm aware of is to list each table
> individually with "-t table1 -t table2 ..." to "vacuumdb --jobs",
> which is not pleasant and not exceedingly beautiful.
> Thanks.
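One way to take the pain out of that is to generate the -t list from
the catalog instead of typing it out. An untested sketch ('mydb' is a
placeholder, and it assumes no table names that need extra shell
quoting):

    $ psql -d mydb -At -c "
        SELECT '-t ' || quote_ident(schemaname) || '.' ||
               quote_ident(tablename)
        FROM pg_tables
        WHERE schemaname NOT IN ('pg_catalog', 'information_schema');" \
      | xargs vacuumdb --full --jobs=4 -d mydb

Since the documented deadlocks come from the system catalogs being
FULL-vacuumed in parallel, restricting the list to user tables like
this should sidestep them; the catalogs can then be done in a separate
non-parallel pass. Note that with a very large number of tables xargs
may split this into more than one vacuumdb run.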
--
Adrian Klaver
adrian.klaver@xxxxxxxxxxx