Re: The trap when using pg_dumpall

On Sat, Jul 03, 2021 at 11:04:26PM -0700, Dean Gibson (DB Administrator) wrote:
> Well, I never store the output of pg_dumpall directly; I pipe it through
> gzip, & the resultant size differs by about 1% from the size from pg_dump in
> custom-archive format.
> 
> I also found that pg_dumpall -g doesn't get the triggers; pg_dumpall -s
> does.  I don't know if pg_dump gets the triggers.

Triggers live inside the database, so a normal pg_dump handles them.
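
For example, you can see them in a schema-only dump (the database name
"mydb" here is just a placeholder):

    $ pg_dump -s mydb | grep 'CREATE TRIGGER'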

Generally, I stand firmly in the position that one should never use
pg_dumpall.
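
What I'd do instead, as a rough sketch (database and file names are
just placeholders): dump the globals separately, and each database
with pg_dump:

    $ pg_dumpall -g > globals.sql      # roles and tablespaces only
    $ pg_dump -Fd -f mydb.dump mydb    # one directory-format dump per database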

The size of the dump is one thing, but the inability to sanely filter
what you will load is the deal breaker.
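
With a custom- or directory-format dump you can pick objects at restore
time; with pg_dumpall's plain SQL you can't. Roughly (names are
placeholders):

    $ pg_restore -l mydb.dump > objects.list
    # edit objects.list, comment out whatever you don't want restored
    $ pg_restore -L objects.list -d mydb_copy mydb.dump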

Plus, with modern pg_dump and -Fd, both dump time and restore time can
be cut significantly thanks to parallelism.
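
For example (the number of jobs and the names are only illustrative):

    $ pg_dump -Fd -j 8 -f mydb.dump mydb
    $ pg_restore -j 8 -d mydb_restored mydb.dump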

I wrote about it here:
https://www.depesz.com/2019/12/10/how-to-effectively-dump-postgresql-databases/

depesz




