Michael Goldner <mgoldner@xxxxxxxxxxxx> writes:
>> The fact that there are so many dead large objects is what I'd be
>> worrying about.  Does that square with your sense of what you've
>> removed, or does it suggest you've got a large object leak?  Do you
>> use contrib/lo and/or contrib/vacuumlo to manage them?

> I am in the process of archiving off about 1.4 million large objects from a
> total of 1.7 million.

OK, and no doubt they're the 1.4 million oldest ones?  So what we're
seeing is VACUUM removing all of the rows in the earlier part of the
table, but when it eventually gets to the end there will be a lot of
non-removed rows.

>> The numbers also suggest that you might be removing all or nearly
>> all of the rows in pg_largeobject.  If so, a CLUSTER on it might
>> be more effective than VACUUM as a one-shot cleanup method.

> My understanding is that CLUSTER is a blocking operation.  My database
> supports a 24x7 operation, so I cannot bring the system offline for extended
> periods.

You wouldn't want to do a CLUSTER then.  But you're going to have an
awful lot of dead space in pg_largeobject if you don't.  Might want to
think about it during your next scheduled maintenance window (you do
have some, I hope).

The way to avoid getting into this situation in future is to archive on
a more routine basis.  Removing 10% of the rows at a time doesn't put
you in a situation where you desperately need to reclaim that space;
you can just wait for it to get used up again during normal operations.
It's a bit harder to adopt that attitude when you know that 80% of
pg_largeobject is dead space.

What's more, you're going to have to boost max_fsm_pages quite a lot,
or the dead space won't get reused very effectively at all ...

			regards, tom lane
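
For illustration, the routine cleanup and one-shot compaction discussed
above might look roughly like the following.  This is a minimal sketch:
"mydb" is a placeholder database name, the max_fsm_pages value is
illustrative only, and it assumes contrib/vacuumlo is installed and that
CLUSTER is permitted on pg_largeobject on your release.

    # routine pass: remove large objects that are no longer referenced,
    # then let plain VACUUM mark their pg_largeobject pages as free space
    vacuumlo -v mydb
    psql -d mydb -c "VACUUM VERBOSE pg_largeobject;"

    # one-shot compaction, maintenance window only (takes an exclusive
    # lock on pg_largeobject); if CLUSTER of this system catalog is not
    # allowed on your release, VACUUM FULL pg_largeobject is the fallback
    psql -d mydb -c "CLUSTER pg_largeobject_loid_pn_index ON pg_largeobject;"

    # postgresql.conf -- size this to cover the dead pages that
    # VACUUM VERBOSE reports; changing it requires a server restart
    max_fsm_pages = 2000000

Running the vacuumlo/VACUUM pair from cron after each archiving batch is
the "routine basis" approach: each pass frees a small fraction of the
table, which normal operations can then reuse, so the one-shot CLUSTER
becomes unnecessary.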