We have many databases of the same type, separated for data governance reasons, that all share the same web front-end code. At present we replace functions and run data updates on the databases in series, and the whole run usually completes across all databases in under a minute. (The updates are done with simple SQL files that connect to each database in turn, load a stub file pointing at each function to drop and reload, and then run the data-update queries.)

For larger updates, though, the window in which the front-end code is out of step with the databases can cause end-user problems. Unfortunately our schema arrangement isn't clean enough to swap out function schemas in a transaction to sort out that part of the problem (if in fact that would work anyway).

One solution might be to do the updates in parallel. Another thought would be to stage the update code in a text field in a table in each database and have pg_cron execute it, so every database switches over at roughly the same moment. Rough sketches of both ideas follow below.

Bearing in mind the possible problems of connection saturation or massive IO spikes, I'd be grateful to learn of any thoughts on how to negotiate this problem.
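To make the parallel idea concrete, here is a minimal sketch, assuming the existing per-database script is update.sql and the database names sit one per line in dblist.txt (both file names are hypothetical). The -P flag caps concurrency so we don't saturate connections or IO all at once, and --single-transaction makes each database's update all-or-nothing, which shrinks the out-of-step window to a single commit per database (provided the script contains no commands that refuse to run in a transaction block, such as VACUUM or CREATE INDEX CONCURRENTLY):

# Run the update against up to four databases at a time.
xargs -P 4 -I{} \
    psql -v ON_ERROR_STOP=1 --single-transaction -d {} -f update.sql \
    < dblist.txt

For the pg_cron idea, a rough sketch under these assumptions (all object and file names hypothetical): stage the update code in a text column in each database, then let a scheduled job EXECUTE whatever is pending, so every database applies it within the same minute. One caveat: pg_cron can only be installed in one database per cluster, so if these databases share a cluster the job would have to be scheduled from that database with cron.schedule_in_database() rather than locally as shown here:

for db in $(cat dblist.txt); do
psql -v ON_ERROR_STOP=1 -d "$db" <<'SQL'
CREATE TABLE IF NOT EXISTS pending_updates (
    id      serial PRIMARY KEY,
    code    text NOT NULL,                -- DDL/DML to run verbatim
    applied boolean NOT NULL DEFAULT false
);
-- Poll once a minute and EXECUTE any unapplied update text.
SELECT cron.schedule('apply-pending-updates', '* * * * *', $job$
    DO $do$
    DECLARE r record;
    BEGIN
        FOR r IN SELECT id, code FROM pending_updates
                 WHERE NOT applied ORDER BY id
        LOOP
            EXECUTE r.code;
            UPDATE pending_updates SET applied = true WHERE id = r.id;
        END LOOP;
    END
    $do$;
$job$);
SQL
done

Since the whole DO block runs inside one transaction, each database's update would again be all-or-nothing. The job could be dropped with cron.unschedule('apply-pending-updates') once an update has gone out, or left in place as a permanent poller.

Rory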