I know that it's much faster (for us) to run many smaller queries than one large query, and I think that it's primarily because of your reason a. Most of our problems come from Pg misestimating the result of a join and making a bad plan decision. Batching dramatically reduces the likelihood of this. -Mark
Show us the plan (output of EXPLAIN ANALYZE), along with detailed info about the table (number of rows / pages) and environment (amount of RAM, etc.). Without this information it's impossible to tell which of the choices is right.
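For instance, something along these lines is usually enough to start with (just a sketch - the table names are placeholders, and the query is whatever your slow statement actually is):

    -- the plan actually chosen, with real timings
    EXPLAIN ANALYZE SELECT ... ;   -- your slow query here

    -- row / page counts the planner knows about for the tables involved
    SELECT relname, reltuples, relpages
      FROM pg_class
     WHERE relname IN ('your_table', 'your_other_table');

    -- the settings currently in effect
    SHOW shared_buffers;
    SHOW work_mem;
    SHOW effective_cache_size;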
In my experience the planner is a very good piece of software, but if you feed it bad information about the environment / data to begin with, you can hardly expect good plans.
Occasionally I provide consultancy to developers having problems with PostgreSQL, and while most of the time (say 70% of the time) the problems are caused by mistakes in SQL or incorrect design of the system, problems with PostgreSQL configuration are quite common too. I really don't know whether you're using the default values or have tuned the settings (and how) - it's hard to tell from your original post.
For example, if you don't set work_mem according to the memory actually available, you may end up with on-disk sorts even though you have plenty of free memory. Or if you have fast drives, the default cost settings may produce bad plans (an index scan may seem too expensive), etc. And of course you may have data with complicated statistical properties, and the default level of detail in the statistics may not be sufficient (try increasing it with SET STATISTICS for the column).
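To illustrate (just a sketch - the values and the table / column names are made up, and the right numbers depend entirely on your hardware and data):

    -- give sorts / hashes more memory, for this session only
    SET work_mem = '64MB';

    -- on fast drives random I/O is not that much more expensive than
    -- sequential, so index scans look too costly with the default
    -- random_page_cost = 4
    SET random_page_cost = 1.5;

    -- collect more detailed statistics for a column with skewed data,
    -- then re-analyze so the planner actually sees them
    ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 1000;
    ANALYZE orders;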
Anyway - there may of course be a glitch / corner case in the planner, but it's hard to tell without the EXPLAIN ANALYZE output.
regards Tomas