Hi Thorsten:

On Thu, Jul 15, 2021 at 6:30 PM Thorsten Schöning <tschoening@xxxxxxxxxx> wrote:

> I need to backup multiple host with multiple Postgres databases each.
> In all of those cases I'm interested to backup all databases, which
> makes pg_dumpall a natural choice at first glance. Though, from my
> understanding of the docs that is only capable of storing all
> databases per host into one single file.

I think what you are calling a host is better called a server/cluster/instance (a host may be running several Postgres installations; pg_dumpall does not dump the whole host, just the instance you point it to).

...

> So, is there some option I'm missing telling pg_dumpall to dump into
> individual files, simply named after e.g. the dumped databases?
> If not, was a feature like that discussed already or what's the
> reasons to not do that? There are a lot of search results how to dump
> all databases with lots of different scripting approaches. Many of
> those could simply be avoided with pg_dumpall supporting that
> already.

It would probably complicate it. Dumping a whole cluster using something like pg_dumpall -g for the globals plus a loop over the databases using something like pg_dump -Fc (which I would always recommend over plain SQL format) is just a (slightly complex) one-liner or a ten-line script, probably not worth the scarce developer/maintainer brain cycles.

> Tools like BorgMatic making use of pg_dumpall might benefit of such a
> feature as well:

They might, but in a project of the (apparent, I have not dug into it much) size of that one, I would probably just include a script, or dump the databases as individual backup objects (different retention cycles / copies per database, skipping of dev/test databases, etc.).

The script to dump a whole cluster is just one line for the pg_dumpall, one psql line to grab the database names and another for a loop pg_dumping all of them (see the sketch below). From then on you can improve it a bit for special client-software purposes, so the feature does not seem to pull its weight.

FOS
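
PS: for illustration, a minimal sketch of such a loop, assuming passwordless access (peer auth or ~/.pgpass) and a placeholder /backups directory; connection parameters are left to the usual PG* environment variables or libpq defaults:

    #!/bin/sh
    # Dump the globals (roles, tablespaces) once per cluster.
    pg_dumpall -g > /backups/globals.sql

    # Dump every non-template database in custom format (pg_dump -Fc).
    for db in $(psql -AtX -c "SELECT datname FROM pg_database WHERE NOT datistemplate"); do
        pg_dump -Fc -f "/backups/${db}.dump" "$db"
    done

From there you can add per-database exclusions, retention handling, error checking, etc.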