On Thu, Mar 15, 2012 at 6:43 AM, Aleksey Tsalolikhin <atsaloli.tech@xxxxxxxxx> wrote:
> Is there any way to consolidate the pages on the slave without taking
> replication offline?

Filip Rembiałkowski suggested: maybe CLUSTER?

Greg Williamson suggested: pg_reorg

Thank you, Filip and Greg. They would both work IF I had enough free
space on the slave, which, sadly, I do not. :(

CLUSTER requires free space at least equal to the sum of the table size
and the index sizes. pg_reorg requires roughly twice the size of the
target table and its indexes.

Too bad I can't say "CLUSTER TABLE tablename USING_ARRAY /dev/temp/array"
or something like that, using an external array for temporary storage
just for the cluster. I do have an external USB drive with more than
enough free space on it.

I've got a maintenance window scheduled for tomorrow night to get rid of
the holes in the pages on the slave. My plan is to shut down the
application, destroy the replication set, re-create it, and start
replication, which will do a full sync. It's a little overkill, but I
have this procedure documented and I've done it before. I expect that
after the table is TRUNCATE'd on the slave and COPY'd over from the
master, we won't have holes in the pages... I sure hope so!

Our database is about 200 GB. Over our WAN link, the last full sync took
8 hours; I expect it'll be more like 9 or 10 hours this time.

I still don't know where these holes came from, or what is responsible:
our Postgres 8.4.9, or Slony 1.2.21, or some combination of the two? We
don't delete rows from that table AFAIK, so I'm really confused why the
page contents would be so sparse. And since I'm not fixing the root
cause, I'm sure the problem will come back. But at least this keeps my
slave operational (which it won't be once the filesystem hits 100%).

Thanks for all the help and suggestions so far. I really appreciate it!
Any more pointers would be welcome.

Too bad there is no way to compact the rows/pages within the table
without using a temporary table. That would be the silver bullet for my
particular predicament.

Yours,
Aleksey
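
P.S. In case it helps anyone else sizing this kind of thing up: a rough
way to see how much of the heap is actually holes (free space) versus
live tuples, and how much room CLUSTER would need, is something like the
sketch below. This is untested on my end; "bigtable" is just a stand-in
name, and it assumes the pgstattuple contrib module has been installed
(on 8.4 it's a separately-installed contrib SQL script, not built in):

    -- table plus indexes and TOAST: roughly what CLUSTER needs free
    SELECT pg_size_pretty(pg_total_relation_size('bigtable'));

    -- how much of the heap is free space / dead tuples (the "holes");
    -- needs the pgstattuple contrib module, and it scans the whole table
    SELECT tuple_percent, dead_tuple_percent, free_percent
      FROM pgstattuple('bigtable');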