Hello,

I've got the following two-table events queue implementation (the general idea is that multiple writers put events in, while multiple readers retrieve and handle them in order):

Table events:
  ev_id: SERIAL
  ev_data: bytea         -- serialized event details

Table eventsconsumers:
  con_name: text UNIQUE  -- consumer name
  con_lastevent: integer

Consumers issue:

  SELECT * FROM events WHERE ev_id > <con_lastevent> LIMIT XX

to fetch new events[1]. Once an event is handled (or ignored) by a reader, the reader sets con_lastevent to the handled ev_id.

The obvious problem with this simple implementation is the following race condition:

-- Writer1
BEGIN;
INSERT INTO events ...;  -- ev_id = 1

-- Writer2
BEGIN;
INSERT INTO events ...;  -- ev_id = 2
COMMIT;

-- Reader1
SELECT * FROM events WHERE ev_id > 0;  -- first round
UPDATE eventsconsumers SET con_lastevent = 2 WHERE con_name = 'Reader1';

-- Writer1
COMMIT;  -- Reader1 missed ev_id = 1

I've got two ideas to solve it:

1-

BEGIN;
LOCK TABLE events IN ACCESS EXCLUSIVE MODE;
INSERT INTO events ...;
COMMIT;

This doesn't seem too bright: events might get posted at the beginning of a (rather long) transaction, so the exclusive lock would be held until that transaction commits, and there are many active writers.

2-

<while not successful>:
{
  BEGIN;
  LOCK TABLE events IN SHARE MODE NOWAIT;  -- block writers
}
SELECT * FROM events ...;
COMMIT;

Intuitively I believe backing off with NOWAIT on the reader side is better (since reader performance is not that important).

Could you suggest better ways to solve this problem? Maybe I'm missing something obvious here.

Thanks,
Tomas

[1] LISTEN/NOTIFY is used for "new-event-arrived" notifications.
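
P.S. In case it helps, here is roughly what the two tables look like as DDL (column types as described above; the PRIMARY KEY on ev_id is an assumption on my part):

CREATE TABLE events (
    ev_id   SERIAL PRIMARY KEY,  -- assuming ev_id is the primary key
    ev_data bytea                -- serialized event details
);

CREATE TABLE eventsconsumers (
    con_name      text UNIQUE,   -- consumer name
    con_lastevent integer
);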
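
And a slightly more concrete sketch of how I imagine the reader side of idea 2 working (XX and <last handled ev_id> are placeholders; the back-off/retry loop lives in the application, since the failed NOWAIT lock aborts the transaction and it has to be restarted from BEGIN):

-- retried by the application with a short sleep whenever the LOCK fails
BEGIN;
LOCK TABLE events IN SHARE MODE NOWAIT;  -- errors out immediately if any writer
                                         -- still holds an uncommitted INSERT

SELECT ev_id, ev_data
  FROM events
 WHERE ev_id > (SELECT con_lastevent
                  FROM eventsconsumers
                 WHERE con_name = 'Reader1')
 ORDER BY ev_id                          -- so LIMIT returns the oldest events first
 LIMIT XX;

-- ... handle (or ignore) the fetched events ...

UPDATE eventsconsumers
   SET con_lastevent = <last handled ev_id>
 WHERE con_name = 'Reader1';
COMMIT;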