Re: Insert into on conflict, data size up to 3 billion records

On 2/15/21 12:22 PM, Karthik K wrote:
Yes, I'm using \copy to load the batch table.

With the new design that we are doing, we expect fewer updates and more inserts going forward. One of the target columns I'm updating is indexed, so I will drop the index and try it out. Also, from your suggestion above, splitting the ON CONFLICT into a separate INSERT and UPDATE is performant, but in order to split the records into batches (low, high) I first need to do a count of the primary key on the batch tables.
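(For illustration only: a sketch of what the split might look like, assuming a hypothetical target table and batch staging table keyed by a bigint id column; none of these names or bounds come from the thread.)

  -- batch is assumed to be staged with something like:
  --   \copy batch (id, val) FROM 'batch.csv' WITH (FORMAT csv)

  -- Step 1: update the rows that already exist, one key range at a time.
  UPDATE target t
  SET    val = b.val
  FROM   batch b
  WHERE  t.id = b.id
  AND    b.id >= 1000000 AND b.id < 2000000;

  -- Step 2: insert the keys still missing over the same range; DO NOTHING
  -- guards against a row that appeared between the two statements.
  INSERT INTO target (id, val)
  SELECT b.id, b.val
  FROM   batch b
  WHERE  b.id >= 1000000 AND b.id < 2000000
  ON CONFLICT (id) DO NOTHING;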


I don't think you need to do a count per se. If you know the approximate key range in the incoming/batch data (or, better, its actual min and max), you can derive the batch boundaries directly from that.
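For example, a sketch of deriving, say, 100 half-open key ranges from just the min and max of the staged keys, against the same hypothetical batch(id) table as above:

  SELECT lo + n * width       AS range_start,  -- inclusive
         lo + (n + 1) * width AS range_end     -- exclusive
  FROM  (SELECT min(id) AS lo,
                max(id) AS hi,
                (max(id) - min(id)) / 100 + 1 AS width  -- integer division
         FROM   batch) s,
        generate_series(0, 99) AS n;

The +1 on the width makes the last range_end land past max(id), so every key is covered. And if batch.id happens to be indexed, min() and max() read just the two index endpoints, whereas count(*) has to visit every entry.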




