On 04/03/20, David G. Johnston (david.g.johnston@xxxxxxxxx) wrote:
> On Wed, Mar 4, 2020 at 4:41 PM Rory Campbell-Lange <rory@xxxxxxxxxxxxxxxxxx>
> wrote:
>
> > Any idea on how to run execute_dynamic across many databases at roughly
> > the same time?
> >
> > I'm just wondering if Guyren Howe's idea of having many transactions
> > open waiting for a clock time to commit is in fact feasible, due to
> > (presumably) having to keep connections open to every database
> > from the client until the transactions complete.
>
> Clock time synchronization is possible, so it's largely a matter of
> resources at that point. If your servers are on machines where you can
> get shell, having each server run psql on its own databases should be
> sufficient.

Yes, that is how we do it at present. We'll have to do some tests.

> I'll go back to my earlier comment, on a separate line of thought, which
> may have been missed: having two commits involved here is probably a
> better option. The first commit is set up to allow both the old and new
> software to continue working normally. The second commit then removes
> the functionality the older software versions are using - after they've
> been phased out.

I did miss that point; thanks for reiterating it.

I think the issue we will have with old/new coexistence is that we would
sometimes hit the "could not choose a best candidate function" signature
problem, as we often extend existing functions with new arguments that
have defaults. But it is certainly something worth testing.

Thanks a lot for the pointers.
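As a postscript, the per-server scheduling approach David describes could be sketched roughly as below. This is only an illustration of the idea, not our actual tooling: the file name migrate.sql and the 02:00 run time are placeholders, and it assumes the shell account can reach all local databases without a password prompt.

```shell
# Hypothetical sketch: on each server, queue the same migration to run
# against every local (non-template) database at the same wall-clock time,
# so no client needs to hold connections open across servers.
for db in $(psql -At -c "SELECT datname FROM pg_database WHERE NOT datistemplate"); do
    echo "psql -d $db -f migrate.sql" | at 02:00
done
```

Using at(1) rather than a long-lived client process means each server commits on its own clock; how tightly the commits line up then depends only on clock synchronisation between the servers.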
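For concreteness, the overload ambiguity I mentioned looks like the following (function names and bodies are invented for the example). Keeping the old signature alongside a new one that merely appends defaulted arguments makes the short call ambiguous:

```sql
-- Old signature, still present while the old software is phased out.
CREATE FUNCTION get_report(p_id integer)
RETURNS text LANGUAGE sql AS $$ SELECT 'v1' $$;

-- New signature: same leading argument, plus a defaulted one.
CREATE FUNCTION get_report(p_id integer, p_format text DEFAULT 'html')
RETURNS text LANGUAGE sql AS $$ SELECT 'v2' $$;

-- A one-argument call now matches both candidates:
SELECT get_report(42);
-- ERROR:  function get_report(integer) is not unique
-- HINT:   Could not choose a best candidate function.
```

So for the two-commit scheme to work for us, the first commit would have to drop and recreate the function with the extended signature (which old callers can still use, thanks to the defaults) rather than leave both signatures installed.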