Re: using a postgres table as a multi-writer multi-updater queue

On 11/23/2015 4:41 AM, Chris Withers wrote:
Hi All,

I wondered if any of you could recommend best practices for using a
postgres table as a queue. Roughly speaking, 100-200 workers will vomit
rows into the table at rates of a few hundred per second, leaving the
status as 'new', and then as many workers as needed to keep up with the
load will plough through the queue, changing the status to something
other than 'new'.

My naive implementation would be something along the lines of:

CREATE TABLE event (
     ts        timestamp,
     event     char(40),
     status    char(10),
     CONSTRAINT pkey PRIMARY KEY(ts, event)
);
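
Since the readers will be scanning for status='new' constantly, I assume
a partial index along these lines would also be wanted (just a sketch;
the index name is made up and it's not part of the naive schema above):

CREATE INDEX event_new_idx ON event (ts) WHERE status = 'new';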


...with writers doing INSERT or COPY to get data into the table and
readers doing something like:

SELECT * FROM event WHERE status='new' LIMIT 1000 FOR UPDATE;

...so, grabbing batches of 1,000, working on them and then setting their
status.
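
For concreteness, here's a minimal sketch of what one worker iteration
might look like (the 'done' status value is made up, and SKIP LOCKED
assumes PostgreSQL 9.5 or later; without it, workers block on each
other's locked rows rather than skipping them):

BEGIN;

-- grab a batch of up to 1,000 unprocessed rows; SKIP LOCKED makes
-- concurrent workers pass over rows another worker has already locked
SELECT * FROM event
 WHERE status = 'new'
 LIMIT 1000
   FOR UPDATE SKIP LOCKED;

-- ... work on the returned rows ...

-- still in the same transaction, mark the batch as processed,
-- keyed by the (ts, event) pairs returned above
UPDATE event SET status = 'done'
 WHERE (ts, event) IN (...);

COMMIT;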

But, am I correct in thinking that SELECT FOR UPDATE will not prevent
multiple workers selecting the same rows?

Anyway, is this approach reasonable? If so, what tweaks/optimisations
should I be looking to make?

If it's totally wrong, how should I be looking to approach the problem?

cheers,

Chris

Have you tried Redis? It's really good at that sort of thing.

-Andy




