Greg Williamson wrote:
> Jesper --
>
> I apologize for top-quoting -- a challenged reader.
>
> This doesn't directly address your question, but I can't help but
> notice that the estimates for rows is _wildly_ off the actual number
> in each and every query. How often / recently have you run ANALYZE on
> this table ?

It is actually rather accurate; what you see in the explain analyze is the "limit" number getting in, whereas the inner "rows" estimate is for the where clause + filter.

> Are the timing results consistent over several runs ? It is possible
> that caching effects are entering into the time results.

Yes, they are very consistent. I have subsequently found out that it depends on the number of "workers" doing it in parallel. I seem to top out at around 12 processes.

I think I need to rewrite the message-queue stuff in a way that can take advantage of some stored procedures instead. Currently it picks out the "top X", randomizes it in the client, picks one and tries to "grab" it .. and over again if it fails. When the "select top X" begins to consume significant time itself, the process bites itself and gradually gets worse. The workload for the individual jobs is small, ~1-2s.

--
Jesper

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
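The "select top X, randomize in the client, try to grab one, retry on failure" pattern described in the message can be sketched as follows. This is a minimal in-memory simulation for illustration only -- the `JobQueue` class, job ids, and grab mechanics are assumptions, not the poster's actual schema or code:

```python
import random
import threading

class JobQueue:
    """In-memory stand-in for the jobs table (illustrative assumption)."""

    def __init__(self, job_ids):
        self._lock = threading.Lock()
        self._pending = list(job_ids)  # ordered "top" jobs

    def top(self, x):
        """The 'select top X' step: peek at up to X pending jobs."""
        with self._lock:
            return self._pending[:x]

    def try_grab(self, job_id):
        """Atomically claim one job; returns False if another worker won."""
        with self._lock:
            if job_id in self._pending:
                self._pending.remove(job_id)
                return True
            return False

def grab_one(queue, x=10):
    """Client-side loop: fetch top X, shuffle, attempt grabs, repeat on miss."""
    while True:
        candidates = queue.top(x)
        if not candidates:
            return None  # queue drained
        random.shuffle(candidates)  # randomize in the client
        for job_id in candidates:
            if queue.try_grab(job_id):
                return job_id
        # every candidate was grabbed by another worker: select top X again
```

As the message notes, once the "top X" selection itself becomes expensive, each failed grab forces another full selection, so the retry loop feeds back on itself and contention degrades gradually rather than failing outright.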