Re: how to make duplicate finding query faster?


 



> On Dec 30, 2020, at 6:24 AM, Sachin Kumar <sachinkumaras@xxxxxxxxx> wrote:
> 
> Yes, I am checking one by one because my goal is to fail the whole upload if there is any duplicate entry and to inform the user that they have a duplicate entry in the file.

That's not what I said, though. If you want to fail the whole upload, then you don't have to check row by row; just run the COPY inside a transaction. Assuming you have the correct constraints in place, any duplicate will make the COPY error out and the whole load rolls back.
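For example, a minimal sketch of the constraint-based approach (table, column, and file names here are illustrative, not from the thread):

```sql
-- Hypothetical target table; the unique constraint does the checking.
CREATE TABLE uploads (
    item_id text PRIMARY KEY
);

-- COPY runs as a single statement: one duplicate aborts the entire
-- load, so no rows from the file are kept.
COPY uploads (item_id) FROM '/path/to/file.csv' WITH (FORMAT csv);
```

If the file contains any value that already exists in the table (or is repeated within the file), the COPY fails with a unique-violation error and nothing is inserted.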

Unless you want to tell the user *which* rows are duplicates, in which case you can try a variant on my prior suggestion: copy into a temp table, then use a join against the target table to find the duplicates...
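A rough sketch of that temp-table variant (again, all names are hypothetical):

```sql
-- Stage the file without any constraints, so the COPY cannot fail.
CREATE TEMP TABLE staging (LIKE uploads);
COPY staging FROM '/path/to/file.csv' WITH (FORMAT csv);

-- Rows in the file that already exist in the target table:
SELECT s.item_id
FROM staging s
JOIN uploads u USING (item_id);

-- Rows duplicated within the file itself:
SELECT item_id, count(*)
FROM staging
GROUP BY item_id
HAVING count(*) > 1;
```

Either query's result can be reported back to the user; if both come back empty, an `INSERT INTO uploads SELECT * FROM staging` completes the load.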




