
Re: Regarding db dump with Fc taking very long time to completion


Hi,

Maybe you can reuse this backup trick.

"Speeding up dump/restore process"
https://www.depesz.com/2009/09/19/speeding-up-dumprestore-process/

for example:
"""
Idea was: All these tables had primary key based on serial. We could easily get min and max value of the primary key column, and then split it into half-a-million-ids "partitions", then dump them separately using:
psql -qAt -c "COPY ( SELECT * FROM TABLE WHERE id BETWEEN x AND y) TO STDOUT" | gzip -c - > TABLE.x.y.dump
"""

best,
Imre



Durgamahesh Manne <maheshpostgres9@xxxxxxxxx> wrote on Fri, 30 Aug 2019 at 11:51:
Hi
To the respected international PostgreSQL team,

I am using PostgreSQL version 11.4.
I have scheduled a logical dump job that runs once daily at the database level.
There is one table in the database with write-intensive activity roughly every 40 seconds.
The size of that table is about 88 GB.
The logical dump of that table is taking more than 7 hours to complete.

I need to reduce the dump time of that 88 GB table.


Regards
Durgamahesh Manne



