At 08:16 19/11/2004 -0600, Bruno Wolff III wrote:

>> The table currently contains just over 10000 elements. So 238 rows is a
>> small part of it.
> No, small is typically less than 1%. This depends on the size of the rows,
> on how much better accessing disk blocks sequentially is in your
> environment, and on the size of your cache.

PG runs on an old computer (200 MHz, 64 MB RAM); this is probably part of my
"problem". With a modern hard drive, a sequential scan could well be faster.

> Because your table is so small it will probably all be cached after being
> read through once, so you may want to tune your config settings to say
> that random disk access costs only a little more than sequential access.

I think the indexes are all cached after a while, but I doubt the tables
can be.

> However, you need to be careful if your table is going to grow
> a lot larger.

The whole database is quite large (for the computer it is on, that is).

>> Since the table is still growing, and the number of rows returned by the
>> query is fairly uniform (it does not depend on the size of the table), I
>> hope that the statistics will evolve into a state that forces the use of
>> the index.
> Index scans aren't always faster than sequential scans.

I know that, but I have made comparisons with other queries. Someone also
advised me to try "set enable_seqscan=off;": using the index takes 50-60%
less time (I re-checked just now). Unfortunately I can't use this setting,
because the query is part of a larger query (joins), and the time gained on
this particular index is partially lost on the joins.

-- 
Marc

---------------------------(end of broadcast)---------------------------
TIP 4: Don't 'kill -9' the postmaster
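
[Editor's note] The two tuning experiments discussed in this thread could be
sketched roughly like this. The values are illustrative, not recommendations,
and the final query is a hypothetical placeholder; on a PostgreSQL of that
era the relevant cost knob is random_page_cost:

```sql
-- Lowering random_page_cost (default 4.0) tells the planner that a random
-- (index) page read costs only a little more than a sequential read, as
-- suggested above for a table that is mostly cached.
SET random_page_cost = 1.5;

-- Disabling sequential scans is safer per-transaction (SET LOCAL, available
-- since 7.3) than globally, since it affects every part of a query,
-- including the joins:
BEGIN;
SET LOCAL enable_seqscan = off;
EXPLAIN ANALYZE SELECT ...;   -- hypothetical query; compare plans and timings
COMMIT;
```

Comparing EXPLAIN ANALYZE output with and without each setting shows whether
the planner's cost estimates, rather than the index itself, are the problem.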