Re: select for update & lock contention

On Wednesday May 5 2004 10:42, Ed L. wrote:
> I think I'm seeing table-level lock contention in the following function
> when I have many different concurrent callers, each with mutually
> distinct values for $1.  Is there a way to reimplement this function
> using select-for-update (or equivalent) in order to get a row-level lock
> (and thus less contention) while maintaining the function interface?  The
> docs seem to suggest so, but it's not clear how to return the SETOF
> queued_item and also use select-for-update to get the row-level locks. 
> TIA.
>
> CREATE OR REPLACE FUNCTION getqueuedupdates (character)
>     RETURNS SETOF queued_item AS '
> DECLARE
>     rows record;
> BEGIN
>     FOR rows IN SELECT * FROM queued_item where subscriber=$1 LOOP
>         RETURN NEXT rows;
>         DELETE FROM queued_item WHERE key=rows.key;
>     END LOOP;
>     RETURN;
> END;'
> LANGUAGE plpgsql;

I should also mention what leads me to suspect lock contention.  First, the 
table is frequently vacuum analyzed, so I'm reasonably confident it's not a 
planner stats issue.  Second, the table usually contains a small number of 
rows (tens to a couple hundred), so I reason it's unlikely that a planner 
issue would slow things down much.  Third, I have put RAISE NOTICE 
statements before and after each statement in the function, and can see 
stalls of several seconds in the server log within the deleting loop.

So, I tried to get a less conflicting lock by using SELECT FOR UPDATE as 
follows, via the PERFORM line (syntax corrections welcome; contextual 
examples of how to do this were not plentiful on Google or in the docs)...

CREATE OR REPLACE FUNCTION getqueuedupdates (character)
    RETURNS SETOF queued_item AS '
DECLARE
    rows record;
BEGIN
    -- obtain row-level locks...
    PERFORM * FROM queued_item WHERE subscriber = $1
        FOR UPDATE OF queued_item;
    FOR rows IN SELECT * FROM queued_item WHERE subscriber = $1 LOOP
        RETURN NEXT rows;
        DELETE FROM queued_item WHERE key = rows.key;
    END LOOP;
    RETURN;
END;'
LANGUAGE plpgsql;


Then I watched the locks with the following command, which I believe shows 
which backends are holding (or waiting for) locks on the table, in which 
modes, and from which SQL statements:

while test 1; do
    psql -c "SELECT now(),
                    d.datname || ':' || r.relname AS table,
                    l.transaction AS xact,
                    l.pid, l.mode, l.granted,
                    pg_stat_get_backend_activity(S.backendid) AS sql
             FROM pg_locks l, pg_class r, pg_database d,
                  (SELECT pg_stat_get_backend_idset() AS backendid) AS S
             WHERE l.relation = r.oid
               AND l.database = d.oid
               AND d.datname = 'testdb'
               AND r.relname = 'queued_item'
               AND pg_stat_get_backend_pid(S.backendid) = l.pid
               AND d.oid = pg_stat_get_backend_dbid(S.backendid)"
    sleep 1
done
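
Separately, I figure I can narrow things down by watching only for 
ungranted lock requests on this table; something roughly like the following 
ought to do it (same pg_locks/pg_stat machinery as above, just filtered on 
granted = false):

-- show only lock requests on queued_item that are still waiting
SELECT now(), d.datname || ':' || r.relname AS table,
       l.pid, l.mode, l.granted,
       pg_stat_get_backend_activity(S.backendid) AS sql
FROM pg_locks l, pg_class r, pg_database d,
     (SELECT pg_stat_get_backend_idset() AS backendid) AS S
WHERE l.relation = r.oid
  AND l.database = d.oid
  AND d.datname = 'testdb'
  AND r.relname = 'queued_item'
  AND l.granted = false
  AND pg_stat_get_backend_pid(S.backendid) = l.pid
  AND d.oid = pg_stat_get_backend_dbid(S.backendid);

If nothing ever shows up there as ungranted during a stall, I suppose that 
would point away from lock waits and toward something else entirely.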


With that first lock-watching loop, I notice several things, leading to 
more questions:

1)  I can now see that the lock mode for the function's delete statements 
has changed from RowExclusiveLock (a table-level lock?) to RowShareLock (a 
row-level lock, which seems like a good thing).

2)  The contention appears to continue.  I am now wondering whether my 
assumption that RowShareLock would reduce contention relative to 
RowExclusiveLock is sound in a case like this, where multiple writers each 
delete a distinct set of rows (one set per subscriber).  What else could 
explain the delays I see in the delete loop?

3)  This table is populated by INSERT triggers on other tables; each of 
those inserts results in N triggered INSERTs into queued_item, one for each 
of N subscribers.  Given that I also see the RowExclusiveLock mode from 
those inserts, I'm wondering whether they might be momentarily blocking the 
delete statements in the function above.  The 7.3.4 docs 
(http://www.postgresql.org/docs/7.3/static/explicit-locking.html#LOCKING-TABLES) 
seem to suggest that RowExclusiveLock does not conflict with RowShareLock, 
but the modes seen above and the modes in the docs leave room for doubt.
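
For context, those triggers look roughly like this (the source table, 
subscriber table, and payload column names below are simplified 
placeholders, not the real schema):

CREATE OR REPLACE FUNCTION queue_for_subscribers () RETURNS trigger AS '
DECLARE
    sub record;
BEGIN
    -- one queued_item row per subscriber for each newly inserted source row
    -- (subscriber, source_table, and payload are placeholder names)
    FOR sub IN SELECT name FROM subscriber LOOP
        INSERT INTO queued_item (subscriber, key, payload)
            VALUES (sub.name, NEW.key, NEW.payload);
    END LOOP;
    RETURN NEW;
END;'
LANGUAGE plpgsql;

CREATE TRIGGER source_table_queue_trig
    AFTER INSERT ON source_table
    FOR EACH ROW EXECUTE PROCEDURE queue_for_subscribers();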

One last thought:  this table does at times see a fairly high volume of 
rows being inserted and then quickly deleted (as much as 100 
inserts/deletes per second).  So the volume of change is large, but the 
number of rows present at any moment stays in flux within a range of maybe 
0-1000.  Vacuum/analyze is done via autovacuum maybe every 5-20 minutes, so 
I suppose the planner could be out of touch with the actual data 
distribution.
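
One way I figure I can double-check the planner-stats angle is to run the 
loop's query by hand during a busy stretch and look at the actual plan and 
timing (the subscriber value below is just a placeholder):

EXPLAIN ANALYZE
    SELECT * FROM queued_item WHERE subscriber = 'some_subscriber';

If that consistently comes back in a few milliseconds with a reasonable 
plan even while the function is stalling, I'd take it as more evidence that 
the waits are lock- or I/O-related rather than a planning problem.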

Ideas?

TIA.

