Michael Goldner <mgoldner@xxxxxxxxxxxx> writes:
> The pg_largeobject table, however, seems a bit odd:
>
> INFO:  vacuuming "pg_catalog.pg_largeobject"
> INFO:  index "pg_largeobject_loid_pn_index" now contains 105110204 row
> versions in 404151 pages
> DETAIL:  778599 index row versions were removed.
> 0 index pages have been deleted, 0 are currently reusable.
> CPU 21.24s/48.07u sec elapsed 273.15 sec.
> INFO:  "pg_largeobject": removed 778599 row versions in 775264 pages
> DETAIL:  CPU 54.73s/29.70u sec elapsed 2203.32 sec.
> INFO:  "pg_largeobject": found 778599 removable, 105094846 nonremovable row
> versions in 34803136 pages
> DETAIL:  0 dead row versions cannot be removed yet.
> There were 70 unused item pointers.
> 0 pages are entirely empty.
> CPU 1031.40s/339.21u sec elapsed 10875.66 sec.
> VACUUM
>
> In particular, "105094846 nonremovable row versions in 34803136 pages" seems
> really high given that there are only about 400,000 large objects.

Well, each row in pg_largeobject is a 2K (at most) chunk of a large object.
There is something funny here, though: if there's only 100GB of LO data, that
would average out to less than 1K per row, which is half what I'd expect.
Do you have another source of large objects that are not-so-large-as-all-that
and might be dragging down the average?

It might be interesting to look at stats such as

	select sum(length(data)) from pg_largeobject;

to confirm that your 100GB estimate for the data payload is accurate.

			regards, tom lane
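
A slightly broader sketch of the same check (not part of the original
exchange; it assumes superuser access to pg_catalog and relies on the
standard loid and data columns of pg_largeobject) also reports the chunk
count, the number of distinct large objects, and the average bytes per
chunk, which is the figure the 2K-per-row reasoning above turns on:

	-- Sketch only: total payload, chunk count, and average chunk size.
	-- length() on the bytea column data returns its size in bytes.
	select count(*)                  as chunks,
	       count(distinct loid)      as large_objects,
	       sum(length(data))         as total_bytes,
	       round(avg(length(data)))  as avg_bytes_per_chunk
	from pg_largeobject;

If avg_bytes_per_chunk comes out well under 2048, the table really does
consist mostly of small chunks, which would account for the low per-row
average Tom describes.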