
Improving pg_dump performance when handling large numbers of LOBs

Hi,

We've inherited a series of legacy PG 12 clusters that each contain a database we need to migrate to a PG 15 cluster. Each database holds about 150 million large objects totaling roughly 250GB. With pg_dump we've found that dumping this much data takes a couple of weeks. We've tried the jobs option with the directory format, but that appears to write each LOB out as a separate file, which makes moving the resulting dump to another location unwieldy. Has anyone else had to deal with dumping a database containing this many LOBs? Are there any suggestions for improving performance?
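For reference, the parallel directory-format dump we've been trying looks roughly like the following (job count, paths, and database name are illustrative placeholders, not our actual values):

    # illustrative only -- actual job count, output path, and db name will differ
    pg_dump -Fd -j 8 -f /backups/mydb.dir mydb

with the idea of then restoring into the PG 15 cluster with a parallel pg_restore -j against that directory.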

Thanks,

Wyatt
