Hi,
I'm looking for suggestions on how to improve the performance of the merge statement below. We have a batch process that loads data into the _batch tables in Postgres, and the task is to update the main target table if the record exists, else insert it. Sometimes these batch tables can grow to 5 billion records. Here is the current scenario:
target_table_main
has 700,070,247 records and is hash partitioned into 50 chunks, with an index on logical_ts.
The batch table has 2,715,020,546 records (close to 3 billion), so I'm dealing with a huge data set and want to do this in the most efficient way. Thank you
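For reference, a minimal sketch of the kind of MERGE being described (available in Postgres 15+). The join key `id` and the column `payload` are placeholders, since the actual statement and schema were not included in the post; only target_table_main and logical_ts come from the description above:

```sql
-- Sketch only: id, payload, and target_table_batch are assumed names.
MERGE INTO target_table_main AS t
USING target_table_batch AS b
    ON t.id = b.id                      -- assumed join key
WHEN MATCHED THEN
    UPDATE SET logical_ts = b.logical_ts,
               payload    = b.payload
WHEN NOT MATCHED THEN
    INSERT (id, logical_ts, payload)
    VALUES (b.id, b.logical_ts, b.payload);
```

At this scale, one common approach is to run the MERGE per hash partition (or in key-range batches) rather than in a single statement, so each transaction touches a bounded amount of data; whether that helps here depends on the actual plan and join key.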