Dorian Hoxha <dorian.hoxha@xxxxxxxxx> writes:
> I am trying to use postgresql as a queue for long-jobs (max ~4 hours)
> using advisory_locks. I can't separate the long-job into sub-jobs.
>
> 1. At ultimate-best-case scenario there will be ~100 workers, so no
> web-scale performance required.
> Is there a problem with 100 open sessions (behind a connection pooler?)
> for hours like in my case?
> The session will commit the progress of the job, but will stay open
> and hold the advisory_lock till the job is done or it expires (see 2).

Sitting on an open transaction for hours would be a bad idea, but an
idle session isn't a problem. So as long as you use session locks, not
transaction locks, this should work fine.

> 2. Is it possible to set a time limit to auto-close an opened session
> that hasn't made a query in a while?
> So if a worker fails, and the session times out, postgresql/pgbouncer
> will close the session and release the lock?

There's no such thing in core postgres. Perhaps pgbouncer or another
connection pooler has such a feature, though a pooler could cause
problems of its own here: it probably won't realize that connections
holding session-level advisory locks aren't interchangeable.

I'm a bit confused, though, as to why you'd want this for your
application. Have you got an a-priori hard limit on how long your
"long jobs" could take? Do you really want to kill their database
connections if they take a bit longer than you thought? As long as
the client stays connected, it seems better to assume it's still
working.
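To make the first point concrete, here's a minimal sketch of the
session-lock pattern (the job id 12345 and the jobs table are made up
for illustration):

    -- take a session-level lock keyed by the job id; it survives
    -- COMMIT and is held until explicitly released or the session ends
    SELECT pg_advisory_lock(12345);

    BEGIN;
    UPDATE jobs SET progress = 10 WHERE id = 12345;
    COMMIT;  -- progress is durable, but the lock is still held

    -- ... hours later, when the job finishes:
    SELECT pg_advisory_unlock(12345);

Had you used pg_advisory_xact_lock(12345) instead, the lock would
disappear at the first COMMIT, which is exactly what you don't want
here.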
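As for crashed workers: session-level advisory locks are released
automatically once the server notices the connection is gone, so
another worker can pick the job up. If you want dead clients noticed
sooner, the TCP keepalive settings are the knob to look at; as a
sketch (the numbers are illustrative):

    -- make the backend probe idle connections; these can also be set
    -- in postgresql.conf or per-session with SET
    ALTER SYSTEM SET tcp_keepalives_idle = 60;      -- seconds before first probe
    ALTER SYSTEM SET tcp_keepalives_interval = 10;  -- seconds between probes
    ALTER SYSTEM SET tcp_keepalives_count = 3;      -- lost probes = dead client
    SELECT pg_reload_conf();

			regards, tom lane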