On Thu, May 5, 2011 at 1:54 PM, mirthcyy <mirthcyy@xxxxxxxxx> wrote:
> hi group,
>
> we need help on one postgresql locking issue:
>
> Originally we had a table like the one below:
>
>   id bigint not null default nextval('xxx'),
>   customer_id int not null,
>   insert_record_date timestamp not null
>   ...
>
> so this id column used a single sequence that applied to all
> customers, and it was the primary key of the table.
>
> Recently we changed to a composite primary key of (id, customer_id),
> where id is now a sequence number within each customer_id, so we
> can't use a plain sequence any more. To insert into the new table we
> first have to calculate the id for that particular customer_id, like:
>
>   SELECT COALESCE(MAX("ID"), 0) + 1 INTO v_ID
>   FROM "Schema"."TABLE"
>   WHERE "Customer_ID" = P_A_Customer_ID;
>
> and then insert the row using the id obtained from that query. We
> also used:
>
>   PERFORM pg_advisory_lock('"Schema"."TABLE"'::regclass::integer,
>                            P_A_Customer_ID);
>
> Then we found out this lock doesn't work: if two transactions for the
> same customer come in very close together, the second one reads
> max(id) from the table while the first one is still working on its
> insert, and both end up computing the same id.
>
> We thought of taking a SHARE UPDATE EXCLUSIVE lock on the table, but
> it would interfere with regular maintenance like VACUUM and would
> also block other updates, etc.
>
> Is there a good way to solve this issue?

see here: http://www.varlena.com/GeneralBits/130.php for a lot of good
ideas.

max(id) is going to be problematic at best. Fundamentally, any
non-sequence approach is going to require some form of locking :(.

merlin
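
For reference, the gist of the "counter table" approach from that
article, as an untested sketch; the counter table and column names
(customer_id_counter, last_id) are made up here, adapt them to your
schema:

  -- one counter row per customer
  CREATE TABLE customer_id_counter (
      customer_id int PRIMARY KEY,
      last_id     bigint NOT NULL DEFAULT 0
  );

  -- inside the insert function, instead of SELECT MAX(...):
  -- the UPDATE row-locks this customer's counter row, so two inserts
  -- for the same customer serialize on it while different customers
  -- don't block each other; the counter bump and the insert sit in
  -- the same transaction, so a rollback undoes both and leaves no gap
  UPDATE customer_id_counter
     SET last_id = last_id + 1
   WHERE customer_id = P_A_Customer_ID
  RETURNING last_id INTO v_ID;

  IF NOT FOUND THEN
      -- no counter row yet; in practice, seed this row when the
      -- customer itself is created so two "first" inserts can't race
      INSERT INTO customer_id_counter (customer_id, last_id)
      VALUES (P_A_Customer_ID, 1);
      v_ID := 1;
  END IF;

  INSERT INTO "Schema"."TABLE" ("ID", "Customer_ID", insert_record_date)
  VALUES (v_ID, P_A_Customer_ID, now());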
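
And if you stick with max(id)+1 plus the advisory lock, the usual
gotcha (hard to say more without seeing the whole function) is lock
lifetime: the lock has to be taken before reading max(id) and released
only after the inserting transaction has committed. pg_advisory_lock /
pg_advisory_unlock are session-level, so that's possible, but the
unlock then has to happen on the application side after COMMIT, not
inside the function (anything inside the function runs before the
commit). Rough shape, with a made-up lock key and customer id:

  -- every writer does this; 12345 = arbitrary app-wide key for this
  -- table, 42 = the customer id
  SELECT pg_advisory_lock(12345, 42);
  BEGIN;
  INSERT INTO "Schema"."TABLE" ("ID", "Customer_ID", insert_record_date)
  SELECT COALESCE(MAX("ID"), 0) + 1, 42, now()
    FROM "Schema"."TABLE"
   WHERE "Customer_ID" = 42;
  COMMIT;
  SELECT pg_advisory_unlock(12345, 42);

Released any earlier, a second transaction can grab the lock, compute
max(id) before the first one's row is visible, and come up with the
same id.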