>> How many cores do you have on that machine? Test if limiting the number of simultaneous feeds, e.g. bringing it down to half of your normal connection count, has the same positive effect. <<

I am told 32 cores on a Linux VM. The operators have tried limiting the number of threads, and they feel the current number of connections is optimal. However, under the same conditions they noticed a sizable boost in performance when the same import was split into two successive imports with shorter transactions. I am just looking to see if there is any reason to think that lock contention (or anything else) over longer vs. shorter single-row-write transactions under the same conditions might explain this.

Carlo

From: Igor Neyman [mailto:ineyman@xxxxxxxxxxxxxx]

From: pgsql-performance-owner@xxxxxxxxxxxxxx [mailto:pgsql-performance-owner@xxxxxxxxxxxxxx] On Behalf Of Carlo

We have a system which is constantly importing flat-file data feeds into normalized tables in a DB warehouse over 10-20 connections. Each data feed row results in a single transaction of multiple single-row writes to multiple normalized tables. The more columns in the feed row, the more write operations and the longer the transaction.

Operators are noticing that splitting a single feed of, say, 100 columns into two consecutive feeds of 50 columns each improves performance dramatically. I am wondering whether the multi-threaded and very busy import environment causes non-linear performance degradation for longer transactions. Would the operators be advised to rewrite the feeds to produce more, smaller transactions rather than fewer, longer ones?

Carlo

> over 10-20 connections

How many cores do you have on that machine? Test if limiting the number of simultaneous feeds, e.g. bringing it down to half of your normal connection count, has the same positive effect.

Regards,
Igor Neyman
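
For anyone skimming the thread, a minimal sketch of the two write patterns being compared follows. It assumes Python with psycopg2; the table names, column names, and DSN are placeholders, not the poster's actual schema or code, since only the transaction boundaries matter here.

    # Minimal sketch of the two import patterns under discussion; table names,
    # columns, and the DSN are placeholders, not the poster's actual schema.
    import psycopg2

    def write_tables(cur, table_values):
        # One single-row INSERT per normalized target table.
        for table, cols in table_values.items():
            names = ", ".join(cols)
            marks = ", ".join(["%s"] * len(cols))
            cur.execute(
                "INSERT INTO {} ({}) VALUES ({})".format(table, names, marks),
                list(cols.values()),
            )

    def import_long(conn, table_values):
        # Current pattern: every write for one feed row in a single, longer transaction.
        with conn, conn.cursor() as cur:
            write_tables(cur, table_values)

    def import_split(conn, first_half, second_half):
        # Workaround pattern: the same writes issued as two shorter, consecutive transactions.
        with conn, conn.cursor() as cur:
            write_tables(cur, first_half)
        with conn, conn.cursor() as cur:
            write_tables(cur, second_half)

    if __name__ == "__main__":
        conn = psycopg2.connect("dbname=warehouse")          # placeholder DSN
        row = {"dim_a": {"col1": 1}, "dim_b": {"col2": 2}}   # placeholder feed row
        import_long(conn, row)
        import_split(conn, {"dim_a": {"col1": 1}}, {"dim_b": {"col2": 2}})
        conn.close()

The split version corresponds to what the operators describe: the same per-row writes, but each transaction (and therefore any row/index locks and the open snapshot it holds) lives for roughly half as long.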