On Tue, 3 Feb 2015 10:53:11 -0500 Adam Hooper <adam@xxxxxxxxxxxxxx> wrote:

> Hi list,
>
> We run a website. We once stored all sorts of files in pg_largeobject,
> which grew to 266GB. This is on an m1.large on Amazon EC2 on a single,
> magnetic, non-provisioned-IO volume. In that context, 266GB is a lot.
>
> We've since moved all but 60GB of that data to S3. We plan to reduce
> that to 1GB by deleting old, unused data. Of course, pg_largeobject
> will still take up 266GB, because autovacuum doesn't return disk space
> to the operating system.
>
> We want to move our entire database to an SSD volume, with as little
> downtime as possible. My tentative plan:
>
> 1. Use CREATE TABLESPACE and pg_repack to move user tables to a temporary volume
> 2. Take down Postgres, copy system-table files to the new volume, and
>    start up Postgres from the new volume
> 3. Use pg_repack to move everything to the new volume
>
> This plan won't work: Step 2 will be too slow because pg_largeobject
> still takes 266GB. We tested `VACUUM FULL pg_largeobject` on our
> staging database: it took two hours, during which pg_largeobject was
> locked. When pg_largeobject is locked, lots of our website doesn't
> work.

Sometimes CLUSTER is faster than VACUUM FULL ... have you tested
CLUSTERing of pg_largeobject on your test system to see if it's fast
enough?

How big is the non-LO data?

--
Bill Moran
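
For reference, a minimal sketch of what testing the CLUSTER suggestion might
look like, assuming superuser access, the stock index name
pg_largeobject_loid_pn_index, and enough free disk for a temporary second
copy of the table. Note that CLUSTER, like VACUUM FULL, holds an ACCESS
EXCLUSIVE lock on the table while it runs:

    -- How big is pg_largeobject right now (table plus index)?
    SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject'));

    -- How big is everything else (the "non-LO data")?
    SELECT pg_size_pretty(pg_database_size(current_database())
                          - pg_total_relation_size('pg_largeobject'));

    -- Rewrite pg_largeobject in index order; the old files are dropped and
    -- the freed space is returned to the filesystem. The table is locked
    -- (ACCESS EXCLUSIVE) for the duration, so time this on staging first.
    CLUSTER pg_largeobject USING pg_largeobject_loid_pn_index;

    -- Confirm how much space was reclaimed.
    SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject'));

Whether this beats the two-hour VACUUM FULL is something only the staging
run can tell.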