pg_restore >10 million large objects

Hi all,

There have been some questions recently about pg_dump and huge numbers of large 
objects. I have a query about the opposite: how can I make restoring a database 
with a lot of large objects run faster?

My database has a relatively piddling 13 million large objects, so dumping it 
isn't a problem.
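For anyone wanting to compare numbers, something like this should count them on 
the 8.4 side (pg_largeobject_metadata only exists from 9.0 onwards):

    SELECT count(DISTINCT loid) FROM pg_largeobject;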
Restoring it is a problem, though.
This is for a migration from 8.4 to 9.3; the dump is taken using pg_dump from 
9.3.
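In case it's relevant, the dump is taken more or less like this (host and 
database names are placeholders here), using the custom format so that 
pg_restore can attempt a parallel restore at all:

    pg_dump -Fc --host=old-server --file=mydb.dump mydb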


I've run a test on a significantly smaller system: ~4 GB overall, with 1.1 
million large objects. It took 2 hours, give or take, though the server it's 
on isn't especially fast.
It seems that each "SELECT pg_catalog.lo_create('xxxxx');" is run 
independently and sequentially, despite --jobs=8 being specified.
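The restore itself is invoked more or less like this (again, names are 
placeholders):

    pg_restore --jobs=8 --dbname=mydb mydb.dump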


Is there any magic incantation, or animal sacrifice, I can make to get those 
lo_create() calls to run in parallel?
Our 9.3 production servers have 12 cores (plus HT) and SSDs, so they can 
handle many queries at the same time.


Thanks

-- 
Mike Williams

