Ivan,

* Ivan Voras (ivoras@xxxxxxxxx) wrote:
> I'm investigating options for an environment which has about a dozen
> servers and several dozen databases on each, and they occasionally need to
> run huge reports which slow down other services. This is of course "legacy
> code". After some discussion, the idea is to offload these reports to
> separate servers - and that would be fairly straightforward if not for the
> fact that the report code creates temp tables which are not allowed on
> read-only hot standby replicas.

You could create a new server which has postgres_fdw connections to your
read-only replicas and run the reporting code there. That could be slow, of
course, since the data would have to be pulled across to be aggregated
(assuming that's what your reporting script is doing). If you can't change
the reporting script at all, that might be what you have to do though. Be
sure to look at the postgres_fdw options for batch size and for how
planning is done.

If you can change the reporting script, another option is to create FDWs on
your primary servers, with FDW tables that point to some other server, and
then have the reporting script use those FDW tables as the temp or
destination tables on the replica. The magic here is that FDW tables on a
read-only replica *can* be written to, but you have to create the FDW and
the FDW tables on the primary and let them be replicated.

As also mentioned, you could use trigger-based replication (e.g. Bucardo,
Slony) instead of block-based replication, or you could look at the logical
replication capabilities (pglogical) to see about using that for your
replica-for-reporting instead.

Thanks!

Stephen
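To make the two FDW approaches concrete, here is a minimal sketch of the
DDL involved. All server names, hosts, users, and table names below are
hypothetical, and note that the batch_size option only exists in
postgres_fdw from PostgreSQL 14 on:

```sql
-- Option 1: on a dedicated reporting server, pull data from a read-only
-- replica via postgres_fdw. fetch_size controls how many rows are fetched
-- per round trip; use_remote_estimate asks the remote server for planner
-- estimates instead of guessing locally.
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER replica1 FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'replica1.example.com', dbname 'appdb',
             fetch_size '10000', use_remote_estimate 'true');

CREATE USER MAPPING FOR reporting SERVER replica1
    OPTIONS (user 'reporting', password 'secret');

IMPORT FOREIGN SCHEMA public FROM SERVER replica1 INTO public;

-- Option 2: run this on the *primary*, pointing at a writable scratch
-- server. Once the foreign table definition replicates to the hot standby,
-- the report can INSERT through it even though the standby is read-only,
-- because the writes happen on the remote scratch server.
CREATE SERVER scratch FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'scratch.example.com', dbname 'scratchdb',
             batch_size '1000');  -- rows per remote INSERT (PG 14+)

CREATE USER MAPPING FOR reporting SERVER scratch
    OPTIONS (user 'reporting', password 'secret');

CREATE FOREIGN TABLE report_results (id int, total numeric)
    SERVER scratch OPTIONS (table_name 'report_results');
-- The matching real table must already exist on the scratch server:
--   CREATE TABLE report_results (id int, total numeric);
```

With option 2, the reporting script just replaces its CREATE TEMP TABLE /
INSERT targets with the pre-created foreign tables.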