
Re: Moving delta data faster

On Sat, Apr 6, 2024 at 10:25 PM Adrian Klaver <adrian.klaver@xxxxxxxxxxx> wrote:

Your original problem description was:

"Then subsequently these rows will be inserted/updated based on the
delta number of rows that got inserted/updated in the source database.
In some cases these changed data can flow multiple times per day to the
downstream i.e. postgres database and in other cases once daily."

If the above is not a hard rule, then yes up to some point just
replacing the data in mass would be the simplest/fastest method. You
could cut a step out by doing something like TRUNCATE target_tab and
then COPY target_tab FROM 'source.csv' bypassing the INSERT INTO source_tab.

Yes, actually I didn't realize that TRUNCATE TABLE is transactional/online here in Postgres. In other databases like Oracle it means downtime for read queries on the target table, as the data vanishes from the target table after the truncate (until the data load happens), and there the TRUNCATE auto-commits. Thanks Veem for sharing that option.
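
To make sure I understand the suggestion, here is a minimal sketch of how I picture it, assuming a target table target_tab and an exported file /data/source.csv (both placeholder names). Since TRUNCATE is transactional in Postgres, concurrent readers should keep seeing the old rows until the COMMIT:

BEGIN;
-- Empty the target; other sessions still see the old data
-- until this transaction commits.
TRUNCATE TABLE target_tab;
-- Bulk load the full extract exported from the source system.
-- (Server-side COPY needs appropriate file-read privileges;
-- psql's \copy is the client-side alternative.)
COPY target_tab FROM '/data/source.csv' WITH (FORMAT csv, HEADER true);
COMMIT;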

I also think that the truncate approach will be faster if the delta of changes is large, but if it's a handful of rows, say <5% of the rows in the table, then an upsert/MERGE will perform better. The other downside of the truncate option is that it requires exporting all of the data from the source to the S3 file, which may take longer than bringing over just the delta records. Correct me if I'm wrong.
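
For the delta path, what I have in mind is something along these lines, assuming a staging table delta_stage already loaded with just the changed rows and a primary key id on target_tab (again placeholder names and columns):

-- Upsert the staged delta rows into the target, keyed on the primary key.
INSERT INTO target_tab (id, col1, col2)
SELECT id, col1, col2
FROM delta_stage
ON CONFLICT (id) DO UPDATE
    SET col1 = EXCLUDED.col1,
        col2 = EXCLUDED.col2;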

However, I am still not able to understand why the upsert is less performant than MERGE. Could you throw some light on this, please?
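
For reference, the MERGE form I am comparing it against (available from PostgreSQL 15) would be roughly the following, for the same hypothetical staging table:

MERGE INTO target_tab AS t
USING delta_stage AS s
    ON t.id = s.id
WHEN MATCHED THEN
    UPDATE SET col1 = s.col1, col2 = s.col2
WHEN NOT MATCHED THEN
    INSERT (id, col1, col2) VALUES (s.id, s.col1, s.col2);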

