On 17 March 2016 at 10:21, David G. Johnston <david.g.johnston@xxxxxxxxx> wrote:

- The problem here is that a VACUUM FULL will lock the whole DB for writes, am I right? My DB is 1.7 TB, so it will take a while and the system can't be offline
- Migrate the files to the NFS server
- Delete the schema from the MASTER DB
- Put the slaves into read-only servers
- Run Vacuum FULL into the MASTER DB
- Once the vacuum is done, do a DUMP from the MASTER to the slaves (excluding the GORFS schema of course)
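For that last step, a rough sketch of the dump/restore, assuming the schema is literally named gorfs and using placeholder host and database names (pg_dump's -N/--exclude-schema skips the named schema; this also assumes the target server accepts writes at that point):

    # dump everything except the gorfs schema from the master and load it elsewhere
    pg_dump -h master.example.com -N gorfs mydb | psql -h slave.example.com mydb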
If you are removing the entire object there should be no cause to VACUUM FULL. A vacuum full reclaims unused space within a given relation.

Both DROP TABLE and TRUNCATE have the effect of (near) immediately freeing up the disk space used by the named table and returning it to the operating system.

You want to use VACUUM FULL tablename; if you remove a significant chunk of a table using DELETE or UPDATE and want to reclaim the space that was occupied by the older versions of the rows within "tablename". VACUUM FULL; simply does this for all tables. I'm not sure when locks are taken and removed; likely only the actively worked-on tables are locked, but the I/O hit is global, so targeted locking only buys you so much.

David J.

I see.. so in your opinion a DROP SCHEMA and maybe a VACUUM (not full) would be enough?
I don't deal with Hot Standbys in my day-to-day, but if you DROP SCHEMA, all of the space consumed by indexes and tables in that schema will be freed. The vacuum might make a small difference in performance on the system catalogs (pg_class, stats, etc.) that were updated, but with respect to the dropped schema there won't be anything present for vacuum to touch.
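A rough sketch of that, assuming the schema is named gorfs in the catalog (the size query is only there to see how much will be freed; CASCADE is needed if the schema still contains objects):

    -- approximate space held by the schema's tables, their indexes and TOAST
    SELECT pg_size_pretty(sum(pg_total_relation_size(c.oid))::bigint)
      FROM pg_class c
      JOIN pg_namespace n ON n.oid = c.relnamespace
     WHERE n.nspname = 'gorfs'
       AND c.relkind IN ('r', 'm');

    -- dropping the schema returns that space to the OS almost immediately
    DROP SCHEMA gorfs CASCADE;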
Create and populate a dummy table in a test setup, measure the HD space taken in PGDATA, then drop it and measure again to see it in action.
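A minimal sketch of that test, assuming a scratch database you can write to (the table name and row count are made up for illustration):

    -- create and fill a throwaway table
    CREATE TABLE scratch_blob AS
    SELECT g AS id, md5(g::text) AS payload
      FROM generate_series(1, 1000000) AS g;

    -- size on disk before the drop (table plus indexes and TOAST)
    SELECT pg_size_pretty(pg_total_relation_size('scratch_blob'));
    SELECT pg_size_pretty(pg_database_size(current_database()));

    -- drop it and compare: the database size falls back almost immediately,
    -- without any VACUUM
    DROP TABLE scratch_blob;
    SELECT pg_size_pretty(pg_database_size(current_database()));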
I've only done this using "TRUNCATE" - I've got a system with space constraints and the same kind of "file data" table, and freed up around 20GB with a single fast truncate (though ensuring FKs wouldn't be a problem was fun...).
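A sketch of that FK wrinkle, with hypothetical table names (not the actual schema from this thread): TRUNCATE refuses to run if another table references the target, unless the referencing table is truncated in the same statement or CASCADE is used.

    -- hypothetical setup: file_links references file_data
    CREATE TABLE file_data  (id bigint PRIMARY KEY, blob bytea);
    CREATE TABLE file_links (id bigint PRIMARY KEY,
                             file_id bigint REFERENCES file_data (id));

    -- TRUNCATE file_data;          -- ERROR: cannot truncate a table referenced
    --                              --        in a foreign key constraint
    TRUNCATE file_data, file_links; -- truncate both in one statement, or:
    TRUNCATE file_data CASCADE;     -- CASCADE empties file_links as well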
David J.