Tom Lane wrote:
> Huh, so on a percentage basis the Limit-node overhead is actually
> pretty significant, at least for a trivial seqscan plan like this
> case.  (This is probably about the worst-case scenario, really,
> since it's tough to beat a simple seqscan for cost-per-emitted-
> row.  Also I gather you're not actually transmitting any data to
> the client ...)

Right, I was trying to isolate the cost; in a more complex query, or with results streaming back to the client, it could easily be lost in the noise.

Assuming that the setup time for the node is trivial compared to filtering 10,000 rows, the time per row passing through the Limit node seems to be (very roughly) 140 nanoseconds on an i7. I don't know whether that varies with the number or types of columns.

I just tried returning the results rather than running EXPLAIN ANALYZE, and any difference was lost in the noise with only five samples each way. I wonder how much of the difference under EXPLAIN ANALYZE came from the additional timing calls that instrumentation adds; on a normal run the difference might be less significant.

-Kevin
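
P.S.  For anyone who wants to repeat the comparison, this is roughly the shape of the test; the table name, column definitions, and row count below are placeholders rather than my exact test case:

  -- Build a small table with 10,000 rows (illustrative schema only).
  CREATE TEMP TABLE t AS
    SELECT g AS id, md5(g::text) AS payload
    FROM generate_series(1, 10000) g;
  ANALYZE t;

  -- Plain seqscan: rows are emitted directly from the scan node.
  EXPLAIN ANALYZE SELECT * FROM t;

  -- Same scan with a Limit node on top; LIMIT 10000 still lets every
  -- row through, so the runtime difference divided by 10,000 gives a
  -- rough per-row cost for the Limit node.
  EXPLAIN ANALYZE SELECT * FROM t LIMIT 10000;

  -- To compare without the EXPLAIN ANALYZE instrumentation, enable
  -- psql's client-side timing and run the same two SELECTs:
  -- \timing on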