We extracted the data from Oracle to CSV first, and have already converted the schema objects from Oracle to PostgreSQL as well. We then use COPY to load the CSVs into Postgres. The point is about the two options for making the data load fast; pg_dump is only used to dump metadata from Postgres so we can rebuild indexes and recreate constraints.
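The drop-index / load / recreate-index option described above might look roughly like this (database, table, column, and file names are all illustrative, not from the original thread):

```shell
#!/bin/sh
# Sketch of the "drop and recreate indexes" load option; all names are hypothetical.
DB=targetdb

# 1. Drop secondary indexes so COPY doesn't maintain them row by row.
psql -d "$DB" -c 'DROP INDEX IF EXISTS orders_customer_idx;'

# 2. Bulk-load the CSV that was extracted from Oracle.
psql -d "$DB" -c "\copy orders FROM 'orders.csv' WITH (FORMAT csv, HEADER true)"

# 3. Rebuild the index once all the data is in place.
psql -d "$DB" -c 'CREATE INDEX orders_customer_idx ON orders (customer_id);'
```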
The question is: instead of dropping and recreating the indexes, could we instead run UPDATE pg_index SET indisready = false before the load, and REINDEX again after it?

From: Jeff Janes <jeff.janes@xxxxxxxxx>
On Fri, Jun 17, 2022 at 1:34 AM James Pang (chaolpan) <chaolpan@xxxxxxxxx> wrote:
Where did this idea come from? This is likely to destroy your database.
pg_dump doesn't run against Oracle, so where is the thing you are running pg_dump against coming from? If you already have a fleshed-out schema in PostgreSQL, you should dump the sections separately (with --section=pre-data and --section=post-data) to get the commands to build the objects which should be run before and after the data is loaded.

Cheers,

Jeff
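The pre-data/post-data split Jeff describes could be sketched like this (the database name and output file names are made up for illustration):

```shell
#!/bin/sh
# Sketch of dumping the schema in two sections; names are hypothetical.
DB=targetdb

# Object definitions (tables, types, functions) to run BEFORE loading the data.
pg_dump --section=pre-data -f pre-data.sql "$DB"

# Indexes, constraints, and triggers to run AFTER loading the data.
pg_dump --section=post-data -f post-data.sql "$DB"

# Typical flow: psql -f pre-data.sql, then COPY the data in,
# then psql -f post-data.sql to rebuild indexes and constraints.
```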