Also, you can speed up the process with a multi-process dump and restore: use pg_dump's directory format with parallel worker jobs, or pipe a plain-format dump through the pigz utility for parallel compression.
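For example, a rough sketch of both approaches (database name, paths, and job counts are placeholders for your setup, and these need a running cluster, so adjust before use):

```shell
# Parallel dump: directory format (-Fd) supports -j worker processes.
pg_dump -Fd -j 8 -f /backup/mydb.dir mydb

# Parallel restore of that directory-format dump into a new database:
pg_restore -j 8 -d mydb_new /backup/mydb.dir

# Alternative: plain-format dump compressed with pigz (parallel gzip),
# using 8 compression threads.
pg_dump mydb | pigz -p 8 > /backup/mydb.sql.gz
```

Note that only the directory format supports parallel dump; the pigz pipeline parallelizes the compression, not the dump itself.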
Thanks
On Tue, Jul 16, 2024, 4:00 AM Laurenz Albe <laurenz.albe@xxxxxxxxxxx> wrote:
On Mon, 2024-07-15 at 14:47 -0400, Thomas Simpson wrote:
> I have a large database (multi TB) which had a vacuum full running but the database
> ran out of space during the rebuild of one of the large data tables.
>
> Cleaning down the WAL files got the database restarted (an archiving problem led to
> the initial disk full).
>
> However, the disk space is still at 99% as it appears the large table rebuild files
> are still hanging around using space and have not been deleted.
>
> My problem now is how do I get this space back to return my free space back to where
> it should be?
>
> I tried some scripts to map the data files to relations but this didn't work as
> removing some files led to startup failure despite them appearing to be unrelated
> to anything in the database - I had to put them back and then startup worked.
>
> Any suggestions here?
That reads like the sad old story: by "cleaning down" the WAL files, you deleted the
very files that would have enabled PostgreSQL to recover from the crash caused by
the full file system.
Did you run "pg_resetwal"? If yes, that probably led to data corruption.
The above are just guesses. Anyway, there is no good way to get rid of the files
that were left behind after the crash. The reliable way of doing so is also the way
to get rid of potential data corruption caused by "cleaning down" the database:
pg_dump the whole thing and restore the dump to a new, clean cluster.
Yes, that will mean painfully long downtime. An alternative is to restore a backup
taken before the crash.
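The dump-and-restore route described above could look roughly like this (a sketch only; the port numbers and cluster setup are assumptions, and pg_dumpall also carries over roles and other global objects that a per-database pg_dump would miss):

```shell
# Assumed setup: old, possibly-corrupted cluster on port 5432,
# freshly initdb'ed clean cluster on port 5433.

# Dump the entire old cluster (all databases plus globals such as
# roles and tablespaces) and feed it straight into the new cluster.
pg_dumpall -p 5432 | psql -p 5433 -d postgres
```

Once the new cluster is verified, the old data directory, including the orphaned files left behind by the failed VACUUM FULL, can be removed wholesale.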
Yours,
Laurenz Albe