On Fri, Mar 4, 2011 at 4:45 PM, Adrian Klaver <adrian.klaver@xxxxxxxxx> wrote:
> On Friday, March 04, 2011 2:03:23 pm Aleksey Tsalolikhin wrote:
>> On Fri, Mar 4, 2011 at 7:53 AM, Adrian Klaver <adrian.klaver@xxxxxxxxx> wrote:
>> > What is the data being stored in the table?
>>
>> For the main part, it's an XML file; we store it in the third field.
>> Our XML files are 13 KB - 48 KB in length.
>
> And you are positive that field has not had its storage changed to
> something other than EXTENDED?

Quite.  I just double-checked on both servers, and it is EXTENDED for
everything but the timestamp (5th field), which is PLAIN (on both
servers).

>> From my observation, when the table was under 30 GB in size,
>> TOAST compression worked fine; when it got above 35 GB in size,
>> now TOAST compression is not working.
>
> More questions:)
> How do you know that?
> I thought the size problem only came to light when you tried to set up
> another server.
> Was there some sort of monitoring going on previous to setting up the
> new server?
> Anything else happen around that time?

OK, I have to withdraw my "observation".  Let me refine it (which, I am
afraid, makes it less useful):

I did pg_dump/restore of the database earlier, when it was smaller
(under 30 GB), and did not notice such a doubling in size; I don't
think it occurred.  Certainly the last time we moved the database from
the DR site back to the primary, it did not double in size (it was
around 1 GB then).

Here is what I did see: we've had Slony replication running for a
while (over half a year) from the primary to the DR site, and
filesystem utilization on the primary and the DR was about the same
(around 75%).  Also, I had done some pg_dumps/pg_restores from DR to
Dev and Stage, and the database sizes came out about 1:1.

But most recently, I shut down replication and modified the cluster
config (added several tables and sequences; plus some tables were
modified, so I wanted a clean start on replicating them).  I removed
the Slony schemas and re-created the Slony replication set -- my
filesystem was 75% full on the master, and it hit 100% on the slave!
So I lost my slave!

Then I tried pg_dump/pg_restore and noticed the same thing: that one
table doubles in size.

The last time I did a full Slony re-sync like this, the database was
around 30 GB.  Now we're up to 40-50 GB and we hit the 1:2 factor.

I can't think of anything else happening around this time...

I'm going to try splitting the pg_dump file (40 GB) in half, loading
the 20 GB half, and seeing how much space it takes up in the database.
I'm curious whether I can reproduce the 1:2 swelling with this smaller
table.

Aleksey
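
For reference, a minimal sketch of how the per-column storage modes can
be confirmed straight from the catalog.  The table name request_table
is a placeholder; attstorage is 'p' for PLAIN, 'e' for EXTERNAL, 'm'
for MAIN and 'x' for EXTENDED:

  -- storage mode of every user column in the table
  SELECT attname, attstorage
    FROM pg_attribute
   WHERE attrelid = 'request_table'::regclass
     AND attnum > 0
     AND NOT attisdropped;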
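
A sketch of a spot check on whether the stored XML values are actually
being compressed (xml_data is a placeholder for the third column).
pg_column_size() reports the bytes a value really occupies on disk, so
if it comes back close to octet_length() the values are effectively
stored uncompressed:

  -- compare stored size with raw text length for a sample of rows
  SELECT pg_column_size(xml_data) AS stored_bytes,
         octet_length(xml_data)   AS raw_bytes
    FROM request_table
   LIMIT 20;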
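
And a sketch for seeing where the space goes on each server, again with
request_table as a placeholder.  If the heap sizes match between the
two servers but the totals differ, the extra space is sitting in the
TOAST relation or the indexes:

  -- heap only vs. heap + TOAST + indexes
  SELECT pg_size_pretty(pg_relation_size('request_table'))       AS heap_only,
         pg_size_pretty(pg_total_relation_size('request_table')) AS total;

  -- size of the table's TOAST relation specifically
  SELECT pg_size_pretty(pg_relation_size(reltoastrelid)) AS toast_size
    FROM pg_class
   WHERE relname = 'request_table';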