Hello,

I am using PostgreSQL 9.6.9 (to be upgraded to PostgreSQL 10.4 within a month) running on Debian Stretch on a VPS. I am collecting customer data from distributed locations; there will be around 50-70 locations uploading data. Customer data lives in a local FirebirdSQL database at each location, and about 15 tables in total are synced to PostgreSQL. The application connects to PostgreSQL directly over SSL and performs deletes, inserts, and updates of the necessary records, in that order. Everything runs in a single transaction, and that transaction is rolled back if anything goes wrong.

The very first upload may take more than an hour, depending on internet connection speed; on a fast connection, a relatively large initial transfer completes in under 10 minutes. Subsequent runs are quick and finish in under a minute, even on slow connections.

What I observe during my tests is that if I intentionally drop the internet connection at any stage of the transfer (mostly while inserting into tables), the application reports an error and stops. The next sync operation (sync runs every 5 minutes) then gets a "no wait lock" error and exits without doing anything. That lock situation persists for roughly 1-2 hours, sometimes longer.

I use the Read Committed transaction isolation level, and the default action on close is rollback.

I wonder whether I can avoid these long lock waits, as they are undesirable and break data synchronization for a long period.

Thanks & regards,
Ertan Küçükoğlu
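
P.S. For reference, below is a minimal sketch of what a single sync run does. It is only a simplified illustration, not the exact code: the table and column names are made up, and I show the lock being taken with LOCK TABLE ... NOWAIT for illustration, which is the kind of call that raises the "no wait lock" error while the previous, broken session still holds its locks.

  -- One sync run, simplified (illustrative names only)
  BEGIN ISOLATION LEVEL READ COMMITTED;

  -- Fails immediately with a lock error if another session holds the lock
  LOCK TABLE customers IN EXCLUSIVE MODE NOWAIT;

  -- Delete, insert, then update the records that changed locally
  DELETE FROM customers WHERE location_id = 42 AND customer_no = 1001;
  INSERT INTO customers (location_id, customer_no, name)
  VALUES (42, 1001, 'Example Customer');
  UPDATE customers SET name = 'Renamed Customer'
  WHERE location_id = 42 AND customer_no = 1002;

  COMMIT;  -- the whole transaction is rolled back instead if anything above fails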