On Fri, Aug 30, 2019 at 11:51 AM Durgamahesh Manne <maheshpostgres9@xxxxxxxxx> wrote:

> Logical dump of that table is taking more than 7 hours to be completed
>
> I need to reduce to dump time of that table that has 88GB in size

Good luck!

I would see two possible solutions to the problem:
1) use physical backups and switch to incremental ones (e.g., pgbackrest);
2) partition the table and back up the single partitions, if possible
   (constraints?), accepting that it will become harder to maintain
   (added partitions, and so on).

Are all of the 88 GB written during a single bulk process? I guess not,
so with partitioning you could avoid locking the whole dataset and
reduce contention (and thus dump time).

Luca
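
As a rough sketch of both options (table, database, and stanza names below
are hypothetical placeholders, adjust to your setup). Note that pg_dump's
parallel mode distributes work per table, so it only helps the 88 GB table
once it is split into partitions, each of which is a separate table:

```shell
# Option 2 payoff: a directory-format (-Fd) logical dump, which is the
# only format that supports parallel workers (-j). With the table
# partitioned, the four workers can dump partitions concurrently.
pg_dump -Fd -j 4 -t 'big_table*' -f /backup/big_table.dump mydb

# Option 1: an incremental physical backup with pgBackRest, assuming a
# stanza named "main" has already been configured and a full backup taken.
pgbackrest --stanza=main --type=incr backup
```

The incremental backup only copies files changed since the last backup, so
after the initial full backup the daily cost no longer scales with the
full 88 GB.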