On 7/4/2014 9:18 PM, Tom Lane wrote:
>> There are only 32 tables, no functions, but mostly large objects. Not
>> sure how to measure the LOs directly, but from a quick check of the
>> table sizes I estimate only about 2GB, so 16GB could be LOs. There are
>> 7,528,803 entries in pg_catalog.pg_largeobject.
> Hmm ... how many rows in pg_largeobject_metadata?
pg_largeobject_metadata reports 1,656,417 rows.
By the way, what is pg_largeobject_metadata vs. pg_largeobject since the
counts are so different?
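For reference, the numbers above came from checks along these lines (a rough sketch, not the exact commands I ran; ibc01 is the database named in the pg_dump command below):

    # Sketch only; assumes the ibc01 database from the pg_dump command below.
    # Approximate size of the ordinary tables (my ~2GB estimate).
    psql -d ibc01 -Atc "SELECT pg_size_pretty(sum(pg_total_relation_size(c.oid))::bigint)
                          FROM pg_class c
                          JOIN pg_namespace n ON n.oid = c.relnamespace
                         WHERE c.relkind = 'r'
                           AND n.nspname NOT IN ('pg_catalog', 'information_schema');"

    # Row counts in the two large-object catalogs (7,528,803 vs 1,656,417 here).
    psql -d ibc01 -Atc "SELECT count(*) FROM pg_catalog.pg_largeobject;"
    psql -d ibc01 -Atc "SELECT count(*) FROM pg_catalog.pg_largeobject_metadata;"

    # On-disk size of the large-object data itself.
    psql -d ibc01 -Atc "SELECT pg_size_pretty(pg_total_relation_size('pg_catalog.pg_largeobject'));"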
>> 7547 esignfor 30 10 1148m 1.0g 852 S 2.3 26.9 14:10.27 pg_dump --format=c --oids ibc01
I haven't tested it for any side effects, but the --oids option can
probably be removed, as we don't cross-reference against OID columns
anymore (all OIDs are just a field in a table that now uses a UUID for
cross-referencing). But removing it seemed to make no difference in the
overall time for the pg_dump to complete.
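In case it's useful, the comparison I did was essentially this (a sketch only; the output file names are placeholders, and I haven't yet verified that a dump made without --oids restores cleanly for us):

    # Placeholder file names; only the elapsed times were compared.
    time pg_dump --format=c --oids -f /tmp/ibc01_with_oids.dump ibc01
    time pg_dump --format=c        -f /tmp/ibc01_without_oids.dump ibc01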
> That's a pretty large resident size for pg_dump :-( ... you evidently
> have a lot of objects of some sort, and I'm betting it's LOs, but
> let's make sure.
Is there a postgresql.conf setting that might help? It's a small Linux VM
with 1GB of RAM running a Tomcat web server (we give it 500-700MB)
alongside the PG database. We don't change much beyond max_connections =
70, shared_buffers = 128MB, maintenance_work_mem = 120MB, and
checkpoint_segments = 6.
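For completeness, those are the only values we changed from the defaults, which is easy to confirm from psql:

    # Expected values are the ones listed above for this 1GB VM.
    psql -d ibc01 -Atc "SHOW max_connections;"        # 70
    psql -d ibc01 -Atc "SHOW shared_buffers;"         # 128MB
    psql -d ibc01 -Atc "SHOW maintenance_work_mem;"   # 120MB
    psql -d ibc01 -Atc "SHOW checkpoint_segments;"    # 6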
But in the end, I guess the main question is why the backup takes longer
than the restore, which just seems counter-intuitive to me.
Thanks for all your help and thinking about it!