Hi,

On March 4, 2022 10:42:39 AM PST, Tom Lane <tgl@xxxxxxxxxxxxx> wrote:
>aditya desai <admad123@xxxxxxxxx> writes:
>> One of the service layer apps is inserting millions of records into a
>> table, but one row at a time. COPY is the fastest way to import a file
>> into a table, but the application has a requirement to process each row
>> before inserting it. Is there any way this INSERT can be tuned by
>> increasing parameters? It is taking almost 10 hours for just 2.2 million
>> rows, and the table does not have any indexes or triggers.
>
>Using a prepared statement for the INSERT would help a little bit.
>What would help more, if you don't expect any insertion failures,
>is to group multiple inserts per transaction (ie put BEGIN ... COMMIT
>around each batch of 100 or 1000 or so insertions). There's not
>going to be any magic bullet that lets you get away without changing
>the app, though.
>
>It's quite possible that network round-trip costs are a big chunk of your
>problem, in which case physically grouping multiple rows into each INSERT
>command (... or COPY ...) is the only way to fix it. But I'd start with
>trying to reduce the transaction commit overhead.

Pipelining could also help.
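
For illustration, here is a minimal sketch of combining Tom's batched-commit
suggestion with pipeline mode from Python, assuming psycopg 3 (which exposes
libpq's pipeline support); the table t(a int, b text), the DSN, and the batch
size of 1000 are placeholders, not anything from the original app:

import psycopg

def batches(seq, size):
    # Yield successive size-row slices of seq.
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

# Stand-in for the rows the service layer produces.
rows = [(i, "row %d" % i) for i in range(2_200_000)]

with psycopg.connect("dbname=test") as conn:  # placeholder DSN
    with conn.cursor() as cur:
        for batch in batches(rows, 1000):
            # Queue the whole batch without waiting for a network
            # round trip per row; the pipeline flushes on block exit.
            with conn.pipeline():
                for row in batch:
                    # Per-row processing can happen here.
                    cur.execute(
                        "INSERT INTO t (a, b) VALUES (%s, %s)", row)
            conn.commit()  # one commit per 1000-row batch

Note that recent psycopg 3 versions do something similar inside
cur.executemany() (it uses pipeline mode where the server supports it), so if
the per-row processing can be done before the inserts, switching the app to
executemany() with batched commits may be the smallest change that captures
most of the win.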