Thanks for your response. This did start recently, but not after a kernel update; it was after we moved the db from Machine B to Machine A (which run slightly different kernel versions). However, the problem took about a week to show up after the move. And unfortunately, once the problem occurs the first time, it recurs roughly every 15 minutes. If it happens again today I'll attach gdb to one of the stuck backends and see whether it's waiting on a mutex.
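In case it's useful to anyone following along, the attach-and-backtrace step can be done non-interactively. This is only a sketch, assuming gdb is installed and run with sufficient privileges; 12345 is a placeholder for a stuck backend's pid taken from pg_stat_activity:

```shell
# Sketch: dump a backtrace from a possibly-stuck backend without killing it.
# 12345 is a placeholder; use the procpid reported by pg_stat_activity.
gdb -p 12345 -batch -ex bt
```

In batch mode gdb detaches and exits after running the given commands, so the backend keeps running afterwards.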
Meetesh
On 5/22/06, Craig A. James <cjames@xxxxxxxxxxxxxxxx> wrote:
Meetesh Karia wrote:
> Hi all,
>
> We've recently started having a problem where a query that normally
> executes in ~15ms starts to take upwards of 20s to complete. When the
> connection that ran the query is returned to the connection pool, it appears
> as though a transaction is still in progress so the connection pool
> tries to cancel the transaction and close the connection. This fails
> and the connection is removed from the connection pool. At this point,
> the situation rapidly degrades and we run out of connections to the
> postgres server.
>
> An inspection of the pg_stat_activity table shows that practically every
> connection is running the above-mentioned query and some of those
> queries have been active for many minutes! We've looked at the pg_locks
> table as well and the only exclusive locks are on transactions that are
> open. All other locks are AccessShareLocks. Also, as far as we can
> tell (from looking at the Hibernate stats), every db session that is
> opened is closed.
>
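(A side note for anyone hitting this later: the per-lock and per-backend views can be joined to see which backend holds or awaits each lock. This is a sketch only, using the 8.x-era catalog column names procpid and current_query, which were renamed in later releases, and I haven't run it against this particular cluster:)

```sql
-- Sketch: pair each lock with the backend holding or awaiting it.
-- Column names follow the 8.x catalogs (procpid/current_query were
-- renamed pid/query in later releases).
SELECT l.pid, l.locktype, l.mode, l.granted,
       now() - a.query_start AS running_for, a.current_query
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
ORDER BY a.query_start;
```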
> When this happens, if I kill one of the running postgres processes (just
> by picking the last process returned from "ps -ef | grep postgres"), the
> other queries will immediately finish and the system will respond.
> However, within 15 minutes, we'll be back in the same state as before.
> At that point, I've cycled Apache, Tomcat and Postgres and the system
> then seems to come back.
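(Aside: rather than killing an arbitrary postgres process with kill(1), the server-side function pg_cancel_backend(), available in the 8.x releases, cancels a single backend's current query without taking down the whole backend. A sketch, with 12345 standing in for a pid taken from pg_stat_activity:)

```sql
-- Sketch: cancel the current query on one specific backend
-- instead of killing a randomly chosen postgres process.
SELECT pg_cancel_backend(12345);
```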
This sounds suspiciously like a question I asked a few weeks ago, on April 4: I have a process that just gets stuck. Following suggestions from several of the experts in this forum, I used gdb(1) to attach to one of the frozen Postgres backend processes, and here's what I found:
On 5/12/2006, I wrote:
> Thanks, good advice. You're absolutely right, it's stuck on a
> mutex. After doing what you suggest, I discovered that the query
> in progress is a user-written function (mine). When I log in as
> root, and use "gdb -p <pid>" to attach to the process, here's
> what I find. Notice the second function in the stack, a mutex
> lock:
>
> (gdb) bt
> #0 0x0087f7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
> #1 0x0096cbfe in __lll_mutex_lock_wait () from /lib/tls/libc.so.6
> #2 0x008ff67b in _L_mutex_lock_3220 () from /lib/tls/libc.so.6
> #3 0x4f5fc1b4 in ?? ()
> #4 0x00dc5e64 in std::string::_Rep::_S_empty_rep_storage () from /usr/local/pgsql/lib/libchmoogle.so
> #5 0x009ffcf0 in ?? () from /usr/lib/libz.so.1
> #6 0xbfe71c04 in ?? ()
> #7 0xbfe71e50 in ?? ()
> #8 0xbfe71b78 in ?? ()
> #9 0x009f7019 in zcfree () from /usr/lib/libz.so.1
> #10 0x009f7019 in zcfree () from /usr/lib/libz.so.1
> #11 0x009f8b7c in inflateEnd () from /usr/lib/libz.so.1
> #12 0x00c670a2 in ~basic_unzip_streambuf (this=0xbfe71be0) at zipstreamimpl.h:332
> #13 0x00c60b61 in OpenBabel::OBConversion::Read (this=0x1, pOb=0xbfd923b8, pin=0xffffffea) at istream:115
> #14 0x00c60fd8 in OpenBabel::OBConversion::ReadString (this=0x8672b50, pOb=0xbfd923b8) at obconversion.cpp:780
> #15 0x00c19d69 in chmoogle_ichem_mol_alloc () at stl_construct.h:120
> #16 0x00c1a203 in chmoogle_ichem_normalize_parent () at stl_construct.h:120
> #17 0x00c1b172 in chmoogle_normalize_parent_sdf () at vector.tcc:243
> #18 0x0810ae4d in ExecMakeFunctionResult ()
> #19 0x0810de2e in ExecProject ()
> #20 0x08115972 in ExecResult ()
> #21 0x08109e01 in ExecProcNode ()
> #22 0x00000020 in ?? ()
> #23 0xbed4b340 in ?? ()
> #24 0xbf92d9a0 in ?? ()
> #25 0xbed4b0c0 in ?? ()
> #26 0x00000000 in ?? ()
>
> It looks to me like my code is trying to read the input parameter
> (a fairly long string, maybe 2K) from a buffer that was gzip'ed
> by Postgres for the trip between the client and server... somewhere
> along the way, a mutex gets set, and then ... it's stuck forever.
>
> ps(1) shows that this thread had been running for about 7 hours,
> and the job status showed that this function had been
> successfully called about 1 million times, before this mutex lock
> occurred.
This issue was never resolved; nobody had seen it before. Tom Lane suggested it might be a libc/C++ bug, but unfortunately in my case the lockup occurs so rarely (every few days) that it will be very hard to tell whether any fix actually worked.
If gdb(1) reveals that your process is stuck on a mutex, then you might have a better chance of testing this hypothesis, since your problem recurs within 15 minutes or so.
Did this start recently, perhaps right after a kernel update?
Craig