On Thu, 2006-08-31 at 11:04 -0800, Matthew Schumacher wrote:
> Tom Lane wrote:
> > Matthew Schumacher <matt.s@xxxxxxxxxxxxx> writes:
> >> I have been having performance problems with my DB, so this morning I
> >> added some config to log queries that take more than 250ms. The result
> >> is surprising because some queries will take as long as 10 seconds, but
> >> when you run EXPLAIN ANALYZE on them, they show that indexes are being
> >> used and they run very fast.
> >
> > Is it possible that it's not directly that query's fault? For instance,
> > it could be blocked by a lock held by some other transaction. I can't
> > unfortunately think of any very nice way to deduce this from log entries
> > ... you'd have to catch it in the act and look into pg_locks to find out
> > who's the perpetrator.
> >
> > 			regards, tom lane
>
> This does help me try to figure out where the problem is. The proc in
> question inserts into a very large table and updates another large table.
> Since postgres puts each proc in its own transaction, I'm thinking the
> problem may be the database locking these large tables while this proc
> is called concurrently.
>
> In order to understand this better I need to know how postgres locking
> works and when locks are used. Do you know of any documentation that I
> can read that explains this?

http://www.postgresql.org/docs/8.1/static/mvcc.html

The "Explicit Locking" section details the locks acquired by UPDATE, etc.

From what you described, I would not expect many locking problems. Are
there any other types of queries you run that may cause a lock? Do you
run a periodic "VACUUM FULL" or something? "VACUUM FULL" takes a full
table lock and is usually not necessary. If so, try running just
"VACUUM" without "FULL".

Regards,
	Jeff Davis
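
P.S. If you do want to catch a blocked query in the act, as Tom
suggested, something along these lines can help. This is only a rough
sketch against the 8.1-era pg_locks view (adjust column names if your
catalogs differ):

-- List lock requests that have not been granted, together with the
-- other sessions already holding a lock on the same relation.
-- Note: this only covers relation-level waits (e.g. behind a
-- VACUUM FULL); row-level conflicts show up as ungranted
-- 'transactionid' locks instead, where relation is NULL.
SELECT w.pid  AS waiting_pid,
       w.mode AS wanted_mode,
       h.pid  AS holding_pid,
       h.mode AS held_mode,
       c.relname
FROM pg_locks w
JOIN pg_class c ON c.oid = w.relation
JOIN pg_locks h ON h.relation = w.relation
               AND h.granted
               AND h.pid <> w.pid
WHERE NOT w.granted;

You can then match the pids against pg_stat_activity.procpid to see
which query the blocking session is running.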