Patrick B wrote:
> I have a database which is 4TB big. We currently store binary data in a bytea data type column
> (seg_data BYTEA). The column is behind binary_schema and the files types stored are: pdf, jpg, png.
> Questions:
>
> 1 - If I take out 500GB of bytea data ( by updating the column seg_data and setting it to null ), will
> I get those 500GB of free disk space? or do I need to run vacuum full or either pg_dump?

You'll need VACUUM (FULL) or a dump/restore. A plain VACUUM only marks the
dead space as reusable within the table; it does not return it to the
operating system (except for empty pages at the end of the table).

> 2 - If I choose going ahead with VACUUM FULL, I have 3 streaming replication slaves, Will I need to
> run the vacuum full on them too?

No, and indeed you cannot, since standbys are read-only. The changes made by
VACUUM (FULL) on the primary will be replicated.

> 3 - [2] vacuum full needs some free disk space as same size as the target table. It locks the table
> (cannot be used while running vacuum full) and a REINDEX might be needed after. Am I right?

It locks the table against all concurrent access, but a REINDEX is not
necessary, as the indexes are rewritten as well.

Yours,
Laurenz Albe

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
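For reference, a rough sketch of the sequence discussed above. The table name
is an assumption (the original post only names the schema and the column), and
the WHERE clause is a placeholder for whatever criteria select the rows to clear:

```sql
-- Table name "binary_schema.segments" is a guess; only the column
-- seg_data and the schema binary_schema appear in the question.
UPDATE binary_schema.segments
SET seg_data = NULL
WHERE ...;  -- placeholder: your selection criteria

-- VACUUM (FULL) rewrites the table and its indexes into new files and
-- returns the freed space to the operating system. It takes an
-- ACCESS EXCLUSIVE lock for the duration and needs enough free disk
-- space to hold the new copy of the table.
VACUUM (FULL, VERBOSE) binary_schema.segments;
```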