
Vacuum very big table - how does the full vacuum work in the background/internally?

Hello!

We store some binary files as large objects.
Because of that, the table is now 80 GB.
We deleted 80% of the records (with lo_unlink), and autovacuum reclaimed the space for new elements, so the table doesn't grow anymore, but we need to free up more space on this server.
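
For context, our cleanup currently looks roughly like this (the "documents" table and its "content" OID column below are just placeholder names, not our real schema):

    -- Unlink the large object data, then delete the referencing rows.
    SELECT lo_unlink(content) FROM documents WHERE obsolete;
    DELETE FROM documents WHERE obsolete;

    -- The on-disk size of pg_largeobject does not shrink after this;
    -- autovacuum only makes the freed pages reusable for new large objects.
    SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject'));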

We could delete 99% of these records, but to really give the free space back to the disk we need to run a full vacuum.

Before this operation, we need to know how the PostgreSQL full vacuum works in the background.
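
Concretely, the step we are planning is something like this (just a sketch, we have not run it yet):

    -- Planned maintenance step: rewrite pg_largeobject so that the
    -- freed pages are returned to the operating system.
    VACUUM FULL VERBOSE pg_largeobject;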

Some admins told us that:
a.) It copies the table fully (so a minimum of 66 GB of free space is needed).
b.) Then it deletes the unneeded data.
In this case we would temporarily need that much extra free space, and more time (copying 66 GB could be slow even on an SSD).
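
If that is how it works, we would like to estimate in advance how much of the 80 GB is actually live data that would have to be copied, e.g. with something like:

    -- Live vs. dead row counts for the large object storage
    -- (pg_largeobject stores the data in 2 kB chunks, one row per chunk):
    SELECT n_live_tup, n_dead_tup
      FROM pg_stat_all_tables
     WHERE relname = 'pg_largeobject';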

DBISAM/ElevateDB, ZIP file deletion, and VirtualBox VDI compaction work like this:
a.) Lock the original file/table.
b.) Copy the remaining elements into a new (initially empty) file.
c.) Remove the old file and switch to the new one.
In this case only very limited free space is needed (3-4 GB), and the operation is much faster (because of fewer disk operations).

Please help me: how does the PostgreSQL full vacuum work internally? (Like the first case, the second case, or something else?)

How should we (and our clients) prepare for this operation?
We need to know this to avoid running out of disk space and too much downtime.

Thank you for your help!

Best regards
   dd

