On Thu, Nov 13, 2008 at 01:56:11PM +0100, Markus Wollny wrote:
> Sam Mason wrote:
> > You may have some luck with increasing the statistics target on the
> > entry_id and last_updated columns and re-ANALYZING the table. Then
> > again, the fact that it thinks it's only going to get a single row
> > back when it searches for the entity_id suggests that it's all a bit
> > confused!
>
> Thank you for that suggestion. Increasing the statistics target on
> entity_id from the default 10 to 30 and re-analyzing did the trick:

Even higher may be good for other entities; it thinks it's getting 103
rows back for this entity, whereas in fact it only gets 3 back. Or is
103 a reasonable guess on average?

> "Limit  (cost=340.75..340.75 rows=1 width=12) (actual time=0.084..0.085 rows=1 loops=1)"
> "  ->  Sort  (cost=340.75..341.00 rows=103 width=12) (actual time=0.081..0.081 rows=1 loops=1)"
> "        Sort Key: last_updated"
> "        ->  Index Scan using idx_image_relation_entity_id on image_relation  (cost=0.00..337.30 rows=103 width=12) (actual time=0.059..0.065 rows=3 loops=1)"
> "              Index Cond: (entity_id = 69560)"
> "Total runtime: 0.121 ms"

A target over 100 will change the way the stats are gathered and may
produce a better fit; try the query with a few different entities
(i.e. ones where you know you've got many rows in the table, and ones
where you've only got one or two) and see what row estimates it comes
back with.

The smaller the target, the faster queries are planned; larger targets
should let the planner cope with more uneven datasets. If the
distribution is reasonably uniform you should be able to get away with
a low target, while less even distributions normally need larger ones.

  Sam
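
A minimal sketch of the commands being discussed, assuming the table and
column names visible in the plan above (image_relation, entity_id,
last_updated); the selected column (image_id) and the exact shape of the
query are guesses for illustration, not taken from the thread:

    -- Raise the per-column statistics target (the default in 8.3 is 10)
    -- and regather statistics for the table.
    ALTER TABLE image_relation ALTER COLUMN entity_id SET STATISTICS 100;
    ANALYZE image_relation;

    -- Re-check the planner's estimate for a "big" and a "small" entity;
    -- compare the estimated rows= against the actual rows= in the output.
    EXPLAIN ANALYZE
    SELECT image_id              -- hypothetical column
      FROM image_relation
     WHERE entity_id = 69560
     ORDER BY last_updated
     LIMIT 1;

Repeating the EXPLAIN ANALYZE with a few different entity_id values, as
suggested above, shows whether the higher target actually brings the
estimated row counts closer to the real ones.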