On 3/2/08 11:15 AM, "Tom Lane" <tgl@xxxxxxxxxxxxx> wrote:

> "Michael Goldner" <MGoldner@xxxxxxxxxxxx> writes:
>> Am I stuck in a loop, or is this happening because the size of the relation
>> is so large that postgres is operating on smaller chunks?
>
> It's removing as many dead rows at a time as it can handle. Arithmetic
> suggests that you've got maintenance_work_mem set to 64MB, which would
> be enough room to process 11184810 rows per index scanning cycle.

That is exactly correct regarding maintenance_work_mem. Maybe I should
double it; I think my server has sufficient memory. Is there any downside?

> The fact that there are so many dead large objects is what I'd be
> worrying about. Does that square with your sense of what you've
> removed, or does it suggest you've got a large object leak? Do you
> use contrib/lo and/or contrib/vacuumlo to manage them?

I am in the process of archiving off about 1.4 million large objects out of
a total of 1.7 million, and I removed about 700,000 of them before running
the vacuum. Each large object averages about 256kB, with outliers as big as
100MB. At 2kB per row in pg_largeobject (if I understand correctly), the
number of dead rows doesn't seem unreasonable.

> The numbers also suggest that you might be removing all or nearly
> all of the rows in pg_largeobject. If so, a CLUSTER on it might
> be more effective than VACUUM as a one-shot cleanup method.

My understanding is that CLUSTER is a blocking operation. My database
supports a 24x7 operation, so I cannot take the system offline for extended
periods. How fast is CLUSTER? Given the size of my database, am I looking at
minutes, hours, or days? Can a CLUSTER be cancelled without negative
consequences if I see that it is taking longer than I can allow?

Thanks,

--
Mike Goldner
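
P.S. A few rough sketches of what I have in mind, in case anyone wants to
sanity-check them; the specific numbers and settings below are my own
assumptions, not something I've tested yet.

If each dead row TID costs about 6 bytes of maintenance_work_mem, then 64MB
covers roughly 11,184,810 rows per index-scan pass, which matches the output
I'm seeing. Doubling it just for the vacuum session would look something
like:

    -- session-local bump; assumes the server has spare RAM for it
    SET maintenance_work_mem = '128MB';  -- 8.2+ unit syntax; older releases take kB, e.g. 131072
    VACUUM VERBOSE pg_largeobject;
    RESET maintenance_work_mem;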
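
The dead-row estimate I'm working from (all figures are rough averages taken
from my own data):

    -- back-of-envelope: ~700,000 unlinked objects at ~256kB each, ~2kB per row
    SELECT 700000 * (256 * 1024 / 2048) AS approx_dead_rows;  -- about 89.6 million

which, at roughly 11.2 million rows per pass, would explain the repeated
index-scan cycles.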
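
Before going any further with the CLUSTER idea, I plan to check how much data
it would actually have to rewrite (assuming 8.1 or later for these functions):

    -- size of pg_largeobject, alone and including its index
    SELECT pg_size_pretty(pg_relation_size('pg_largeobject')) AS table_size,
           pg_size_pretty(pg_total_relation_size('pg_largeobject')) AS with_indexes;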