David Wall <d.wall@xxxxxxxxxxxx> writes:
>>> There are only 32 tables, no functions, but mostly large objects.  Not
>>> sure how to know about the LOs, but a quick check from the table sizes
>>> I estimate at only 2GB, so 16GB could be LOs.  There are 7,528,803
>>> entries in pg_catalog.pg_largeobject.

>> Hmm ... how many rows in pg_largeobject_metadata?

> pg_largeobject_metadata reports 1,656,417 rows.

> By the way, what is pg_largeobject_metadata vs. pg_largeobject since
> the counts are so different?

There's one row in pg_largeobject_metadata per large object.  The rows
in pg_largeobject represent 2KB "pages" of large objects (so it looks
like your large objects are averaging only 8KB-10KB apiece; a rough
catalog query for that estimate is sketched below).  The "metadata"
table was added in 9.0 to carry ownership and access permission data
for each large object.

I think this report confirms something we'd worried about during 9.0
development, which was whether pg_dump would have issues with
sufficiently many large objects.  At the time we'd taught it to handle
LOs as if they were full-fledged database objects, since that was the
easiest way to piggyback on its existing machinery for handling
ownership and permissions; but that's rather expensive for objects that
don't really need all the trappings of, eg, dependency tracking.  We'd
done some measurements that seemed to indicate that the overhead wasn't
awful for medium-size numbers of large objects, but I'm not sure we
tried it for millions of 'em.  I guess the good news is that it's only
being a bit slow for you and not falling over completely.

Still, it seems like some more work is indicated in this area.

			regards, tom lane
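
A minimal sketch of that size estimate, assuming the default 2KB
LOBLKSIZE page size and a superuser connection (pg_largeobject is no
longer publicly readable as of 9.0):

    -- pg_largeobject holds one row per 2KB page, and
    -- pg_largeobject_metadata one row per large object, so
    -- pages * 2048 / objects approximates the average LO size
    -- (slightly high, since each object's last page may be partial).
    SELECT pg_size_pretty(
             (SELECT count(*) FROM pg_catalog.pg_largeobject) * 2048
             / (SELECT count(*) FROM pg_catalog.pg_largeobject_metadata)
           ) AS approx_avg_lo_size;

With the counts quoted above (7,528,803 pages over 1,656,417 objects)
this works out to roughly 9KB per object, consistent with the 8KB-10KB
figure.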