Richard Huxton wrote:
Shakil Shaikh wrote:
Hi,
Consider the following scenario:
CREATE FUNCTION test(name text) RETURNS void AS $$
DECLARE
    cnt integer;
BEGIN
    SELECT count(id) INTO cnt FROM items WHERE owner = name;
    -- Suppose someone inserts at this point? Then the next check
    -- will succeed when it should not.
    IF cnt < 10 THEN
        INSERT INTO items VALUES ('new item', name);
    END IF;
END;
$$ LANGUAGE plpgsql;
What is the best way to stop this function from inserting too many
items in a concurrent context? I think a lock is needed, although I'm
not sure which would be most appropriate, since the check depends on a
count over many rows rather than on any single row.
Well, you can't use FOR UPDATE with aggregates, otherwise you could do:
SELECT count(id) INTO cnt FROM items WHERE owner = name FOR UPDATE;
So, you'd need:
PERFORM * FROM items WHERE owner = name FOR UPDATE;
That won't stop someone blindly inserting into items, but it will
effectively serialize concurrent calls to test() for the same owner.
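In context, the whole thing might look like this (a sketch: the return
type, the parameter type and the plpgsql wrapper are assumed):

CREATE OR REPLACE FUNCTION test(name text) RETURNS void AS $$
DECLARE
    cnt integer;
BEGIN
    -- Lock this owner's existing rows; a concurrent test() for the
    -- same owner blocks here until our transaction commits, and its
    -- count then sees the committed insert.
    PERFORM * FROM items WHERE owner = name FOR UPDATE;
    SELECT count(id) INTO cnt FROM items WHERE owner = name;
    IF cnt < 10 THEN
        INSERT INTO items VALUES ('new item', name);
    END IF;
END;
$$ LANGUAGE plpgsql;

One caveat: if the owner has no rows yet there is nothing to lock, so
the very first inserts for an owner can still race.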
The other options would be:
1. Lock the relevant row in the users table (not so good if you have an
items2 table with similar requirements)
2. Advisory locks at the application level
3. Check after an insert on the items table and raise an exception if
there are 11+ items.
I'd be tempted by #3 - assuming most of the time you won't breach this
limit.
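If you do want #2, and transaction-scoped advisory locks are available
(pg_advisory_xact_lock, 9.1 and later), the lock can live inside the
function itself; a sketch, with hashtext() used to turn the owner name
into an integer lock key:

CREATE OR REPLACE FUNCTION test(name text) RETURNS void AS $$
DECLARE
    cnt integer;
BEGIN
    -- Serialize callers per owner. The lock is released only at
    -- commit/rollback, so the next caller's count is guaranteed to
    -- see our committed insert. (A hashtext() collision just means
    -- two owners occasionally share a lock.)
    PERFORM pg_advisory_xact_lock(hashtext(name));
    SELECT count(id) INTO cnt FROM items WHERE owner = name;
    IF cnt < 10 THEN
        INSERT INTO items VALUES ('new item', name);
    END IF;
END;
$$ LANGUAGE plpgsql;

With the older session-level pg_advisory_lock you would have to hold
the lock in the application until after COMMIT, otherwise the count
race comes straight back; that is why #2 says "at the application
level". Unlike the FOR UPDATE approach, this also covers an owner with
no rows yet.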
#3 won't work unless the other transactions have all committed by the
time you do the check: the post-insert count cannot see rows inserted
by concurrent transactions that have not yet committed, so two sessions
can each count 10 and both commit an 11th item. It is guaranteed to
fail at some point.
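To see the failure, consider this interleaving under READ COMMITTED,
with 'bob' already owning 9 items and a limit of 10:

A: BEGIN;
A: INSERT INTO items VALUES ('new item', 'bob');
A: SELECT count(id) FROM items WHERE owner = 'bob';  -- 10, no exception
B: BEGIN;
B: INSERT INTO items VALUES ('new item', 'bob');
B: SELECT count(id) FROM items WHERE owner = 'bob';  -- also 10: A's row
                                                     -- is uncommitted and
                                                     -- invisible to B
A: COMMIT;
B: COMMIT;  -- 'bob' now owns 11 items and neither check fired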
In cases like this, when I don't already have some suitable row locked,
I just pick a row in some table that makes sense to lock. There would
be nothing wrong with creating a table whose rows exist solely for the
purpose of locking. This is a (usually) better version of option #2
above.
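Something along these lines, say (the table and column names here are
just for illustration):

-- One row per owner, created alongside the owner; the rows carry
-- no data and exist solely to be locked.
CREATE TABLE lock_rows (owner text PRIMARY KEY);

-- Inside test(), before counting:
PERFORM * FROM lock_rows WHERE owner = name FOR UPDATE;

The row lock is held until commit, so the next caller blocks and then
counts with the committed insert visible; and unlike locking the items
rows themselves, it still works when the owner has no items yet.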
-Glen