On Sun, 2021-09-19 at 10:28 +0000, Niels Jespersen wrote:
> We are often using oracle_fdw to transfer data between Postgres (version 11+) and Oracle (version 18+). It works great.
>
> However, I have a task at hand that requires inserting a few billion rows into an Oracle table from a Postgres query:
>
>   insert into t_ora (a, b, c)
>   select a, b, c from t_pg;
>
> This is driven from a PL/pgSQL stored procedure, if that matters.
>
> I want to optimize the running time of this, but I am unsure which possibilities, if any, there actually are.
>
> Reducing the number of network round trips is usually a good way to increase throughput. But how do I do that?
>
> If I could make the Oracle insert a direct load, that would usually also increase throughput.
> But is that possible here? There are no constraints defined on the destination tables.

The cause of the poor performance for bulk data modifications is that the FDW API is built that way: each row INSERTed means a round trip between PostgreSQL and Oracle.

That could be improved by collecting rows and inserting them in bulk on the Oracle side, but I don't feel like implementing that and complicating the code.

From my point of view, oracle_fdw is good for reading, but not for bulk writes.

Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
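
If a direct-path load is the real goal, a common workaround outside the FDW is to stage the data as a flat file and load it with Oracle's SQL*Loader. A minimal sketch, assuming the table and column names from the thread; the database name, credentials, and file names (mydb, t_pg.csv, t_ora.ctl, user/password@service) are placeholders, not anything from the original messages:

1. Export from PostgreSQL to CSV:

   psql -d mydb -c "\copy (select a, b, c from t_pg) to 't_pg.csv' with (format csv)"

2. SQL*Loader control file t_ora.ctl:

   LOAD DATA
   INFILE 't_pg.csv'
   APPEND
   INTO TABLE t_ora
   FIELDS TERMINATED BY ','
   (a, b, c)

3. Direct-path load into Oracle:

   sqlldr user/password@service control=t_ora.ctl direct=true

Here direct=true requests SQL*Loader's direct-path load, which bypasses the conventional insert path and avoids the per-row round trips entirely; it changes locking and logging behavior on the target table, so check the SQL*Loader documentation before using it on a production system.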