Hi,
The log you show doesn’t mean a deadlock! It comes from the deadlock_timeout parameter, whose value is 1 second by default. When a session waits for a lock longer than this threshold, a check is triggered to see whether we are in a deadlock condition. Here it is simply that we waited for a lock longer than deadlock_timeout. That said, it can be interpreted as a performance problem if we are frequently waiting more than 1 second for a lock…
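If you want to check the threshold (and the lock-wait logging that produces these messages), something like this should show the relevant settings; deadlock_timeout and log_lock_waits are the standard parameter names:

SHOW deadlock_timeout;  -- 1s by default; waits longer than this trigger the deadlock check
SHOW log_lock_waits;    -- when on, such waits are also logged, which is what you see here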
Regards
Michel SALAIS
From: Edwin UY <edwin.uy@xxxxxxxxx>
Sent: Monday, 11 November 2024 07:45
To: Tom Lane <tgl@xxxxxxxxxxxxx>
Cc: David G. Johnston <david.g.johnston@xxxxxxxxx>; pgsql-admin@xxxxxxxxxxxxxxxxxxxx
Subject: Re: How to check if session is a hung thread/session
Thanks Tom,
OK, I've decided to 'painfully' look at the PostgreSQL RDS logs, and they show something like the entries below.
There seems to be a locking/deadlock issue of some sort somewhere.
I have checked the logs from the days prior to the patching, and these entries appear to be a 'normal' occurrence for this database; they weren't affecting the application then.
After the patching, it started affecting the application. Not sure what else I can check on the Aurora PostgreSQL RDS end. I may request them to restart the app server.
[25751]:LOG: process 25751 still waiting for ShareLock on transaction 114953443 after 1000.054 ms
[25751]:DETAIL: Process holding the lock: 22297. Wait queue: 25751.
[25751]:CONTEXT: while locking tuple (1,17) in relation "[table_name]"
[25751]:STATEMENT: [SQL_STATEMENT] for update
[25751]:LOG: process 25751 acquired ShareLock on transaction 114953443 after 4756.967 ms
[25751]:CONTEXT: while locking tuple (1,17) in relation "[table_name]"
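Next time it happens, I might try to look at what the lock holder is doing while the wait is in progress, with something like this (25751 being the waiting backend from the log above):

SELECT pid, state, wait_event_type, wait_event, query_start, query
FROM pg_stat_activity
WHERE pid = ANY (pg_blocking_pids(25751));  -- pids currently blocking process 25751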
On Mon, Nov 11, 2024 at 5:45 PM Tom Lane <tgl@xxxxxxxxxxxxx> wrote:
Edwin UY <edwin.uy@xxxxxxxxx> writes:
> I thought it could be that the backend had sent something back to the client but
> the client never received it, and the backend just kept doing the same at intervals.
Your pg_stat_activity output shows the backend is idle, meaning it's
waiting for a client command. While the session has been around for
days, we can see the last client command was about 47 minutes ago,
from your "now() - pg_stat_activity.query_start AS duration" column.
I see no reason to think there is anything interesting here at all,
except for a client that is sitting doing nothing for long periods.
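If you want to keep an eye on that, something like this lists the sessions that have been sitting idle the longest:

SELECT pid, state, now() - query_start AS idle_for, left(query, 60) AS last_query
FROM pg_stat_activity
WHERE state = 'idle'
ORDER BY idle_for DESC;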
regards, tom lane