Hi
I would suggest taking a backup of your DB before doing such a thing.
Then run VACUUM FULL on the table:

VACUUM FULL pg_catalog.pg_largeobject;

Running this on a system table can be risky, so make sure you back up the database first. Alternatively, if you are on a PG version above 9.1, you can use pg_repack to reclaim the space.
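For reference, a pg_repack invocation for this table might look like the sketch below. The database name `mydb` is a placeholder, and you should verify that your pg_repack version actually supports repacking this particular catalog table before relying on it:

```
# Sketch only: repack a single table online (mydb is a placeholder name)
pg_repack --dbname=mydb --table=pg_catalog.pg_largeobject
```

Unlike VACUUM FULL, pg_repack holds an exclusive lock only briefly at the start and end, so the table stays usable for most of the run.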
Note: Either approach can be disruptive, so planning and preparing for potential downtime is essential.
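Before scheduling a maintenance window, it may also help to confirm what the free space map actually reports for the table. A quick check with the pg_freespacemap contrib extension could look like this (a sketch; it assumes you can install the extension and have read access to the catalog table):

```sql
-- One-time setup (contrib module shipped with PostgreSQL)
CREATE EXTENSION IF NOT EXISTS pg_freespacemap;

-- Total free space the FSM reports for pg_largeobject
SELECT count(*) AS pages,
       pg_size_pretty(sum(avail)::bigint) AS fsm_free
FROM pg_freespace('pg_catalog.pg_largeobject');
```

If the FSM shows plenty of available space yet inserts still extend the table, that points at the insert path rather than at vacuum itself.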
Thanks & regards
Muhammad Affan (아판)
PostgreSQL Technical Support Engineer / Pakistan R&D
Interlace Plaza 4th floor Twinhub office 32 I8 Markaz, Islamabad, Pakistan
On Sun, Jul 21, 2024 at 3:46 AM <postgresql@xxxxxxxxxxxxxxxxxx> wrote:
Hello All,
I've got a cluster where pg_catalog.pg_largeobject is getting massively bloated. Vacuum is running OK, and there's 700GB of free space in the table with only 100GB of data, but subsequent inserts don't seem to use space from the FSM and instead always allocate new pages. The table just keeps growing.
Is this a known thing, maybe something special about LOs?
Also, is the only way to recover space here a vacuum full on the table since it's a catalog table?
Thanks,
--
Jon Erdman (aka StuckMojo on IRC)
PostgreSQL Zealot