> It is not because of the locks. There is only one running transaction
> (and it holds all the locks). I am not sure why the backend gets stuck
> there; it seems it is not aware of the broken client. In normal
> situations, if you kill a client, the server will print something like
> "could not receive data from client" and then exit the backend.
>
> If you can attach to the problematic postgres process, that would be
> more helpful for identifying the problem.

My ODBC client uploads a 9 MB binary file into a server bytea field in
1 MB hex-encoded blocks, using code something like:

START TRANSACTION;

DELETE FROM localfil WHERE LOWER(filename)='alguss.exe';

INSERT INTO localfil (filename,filedirect,blocknumber,lastchange,contents)
  VALUES ('alguss.exe', 'algus', 1, CURRENT_TIMESTAMP,
          decode('1mbbinarystring', 'hex'));

INSERT INTO localfil (filename,filedirect,blocknumber,lastchange,contents)
  VALUES ('alguss.exe', 'algus', 2, CURRENT_TIMESTAMP,
          decode('1mbbinarystring', 'hex'));

.... etc., 8 INSERT statements in total

COMMIT;

Each INSERT command takes a long time. I terminated the client
application from the Windows XP task manager during an INSERT command.
On the next run, both the application and pgAdmin hang on the
DELETE FROM line.

I think I can reproduce this. The Postgres server runs on FreeBSD, and
I can ask the server admin to do something after the problem occurs.
What should I do after reproducing the problem?

The table structure is:

CREATE TABLE public.localfil (
  Filename    CHAR(50),
  FileDirect  CHAR(8),
  BlockNumber INTEGER,
  lastchange  timestamp NOT NULL,
  Contents    BYTEA NOT NULL,
  PRIMARY KEY (Filename, FileDirect, BlockNumber)
);

CREATE TRIGGER localfil_trig BEFORE INSERT OR UPDATE ON localfil
  EXECUTE PROCEDURE setlastchange();

and the table is empty.
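For what it's worth, the client-side blocking and hex encoding described
above can be sketched roughly as follows (this is only an illustration,
not the actual ODBC client code; the function name `make_insert` and the
use of string interpolation are my own — real code should use bound
parameters rather than splicing the hex string into the SQL text):

```python
import binascii

BLOCK_SIZE = 1024 * 1024  # 1 MB per block, as in the upload above


def make_insert(filename, filedirect, block_number, data):
    """Build one INSERT like those the ODBC client sends: the raw
    bytes are hex-encoded and passed through decode(..., 'hex') so
    the server stores them in the bytea column."""
    hexstr = binascii.hexlify(data).decode("ascii")
    return (
        "INSERT INTO localfil "
        "(filename,filedirect,blocknumber,lastchange,contents) "
        f"VALUES ('{filename}', '{filedirect}', {block_number}, "
        f"CURRENT_TIMESTAMP, decode('{hexstr}', 'hex'));"
    )


def split_blocks(data, block_size=BLOCK_SIZE):
    """Yield (block_number, chunk) pairs, numbering blocks from 1."""
    for i in range(0, len(data), block_size):
        yield i // block_size + 1, data[i:i + block_size]
```

A 9 MB file therefore yields 9 blocks (the post says 8 INSERTs plus the
statement shown, i.e. one statement per block), each carrying roughly
2 MB of hex text on the wire, which is consistent with the INSERTs
taking a noticeable amount of time.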