Hi,
"Speeding up dump/restore process"
Maybe you can re-use these backup tricks:
"Speeding up dump/restore process"
for example:
"""
The idea was: all these tables had a primary key based on a serial column. We could easily get the min and max values of the primary key column, split that range into half-a-million-id "partitions", and then dump each one separately using:
psql -qAt -c "COPY ( SELECT * FROM TABLE WHERE id BETWEEN x AND y) TO STDOUT" | gzip -c - > TABLE.x.y.dump
psql -qAt -c "COPY ( SELECT * FROM TABLE WHERE id BETWEEN x AND y) TO STDOUT" | gzip -c - > TABLE.x.y.dump
"""
best,
Imre
Durgamahesh Manne <maheshpostgres9@xxxxxxxxx> wrote (on Fri, 30 Aug 2019 at 11:51):
Hi,

To the respected international PostgreSQL team,

I am using PostgreSQL 11.4. I have a scheduled logical dump job which runs once daily at the database level. One table has write-intensive activity every 40 seconds, and its size is about 88GB. The logical dump of that table is taking more than 7 hours to complete. I need to reduce the dump time of that 88GB table.

Regards,
Durgamahesh Manne