Tomas Vondra wrote:
When I force it via "set enable_seqscan to off", the index scan
takes about 0.1 msec (as reported by explain analyze), whereas
with the default, it chooses a seq. scan, for a total execution
time around 10 msec!! (yes: 100 times slower!). The table has
20 thousand records, and the WHERE part of the query uses one
field that is part of the primary key (as in, the primary key
is the combination of field1,field2, and the query involves a
where field1=1 and some_other_field=2). I don't think I'm doing
something "wrong", and I find no reason not to expect the query
planner to choose an index scan.
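For reference, the comparison can be reproduced with something along
these lines ("mytable" is a placeholder for the real table name):

  explain analyze select * from mytable
    where field1 = 1 and some_other_field = 2;
  set enable_seqscan to off;
  explain analyze select * from mytable
    where field1 = 1 and some_other_field = 2;
  reset enable_seqscan;

The "Total runtime" lines of the two plans are where the 0.1 msec vs.
10 msec figures above come from.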
1) I'm missing a very important part - information about the settings
in postgresql.conf, especially effective cache size, random page
cost, etc. What hw are you using (RAM size, disk speed etc.)?
show all; responds with the following (I'm keeping only the settings
I think could be relevant):
client_encoding | SQL_ASCII
commit_delay | 0
commit_siblings | 5
cpu_index_tuple_cost | 0.001
cpu_operator_cost | 0.0025
cpu_tuple_cost | 0.01
deadlock_timeout | 1000
effective_cache_size | 1000
enable_hashagg | on
enable_hashjoin | on
enable_indexscan | on
enable_mergejoin | on
enable_nestloop | on
enable_seqscan | on
enable_sort | on
enable_tidscan | on
from_collapse_limit | 8
fsync | on
geqo | on
geqo_effort | 1
geqo_generations | 0
geqo_pool_size | 0
geqo_selection_bias | 2
geqo_threshold | 11
join_collapse_limit | 8
max_connections | 100
max_expr_depth | 10000
max_files_per_process | 1000
max_fsm_pages | 20000
max_fsm_relations | 1000
max_locks_per_transaction | 64
pre_auth_delay | 0
random_page_cost | 4
regex_flavor | advanced
server_encoding | SQL_ASCII
server_version | 7.4.5
shared_buffers | 62000
sort_mem | 1024
statement_timeout | 0
vacuum_mem | 8192
virtual_host | unset
wal_buffers | 8
wal_debug | 0
wal_sync_method | fdatasync
Any obvious red flags in these?
The HW/SW is: Fedora Core 2 running on a P4 3GHz HT, with 1GB of
RAM and 120GB SATA drive.
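For what it's worth, effective_cache_size is given in 8kB pages, so
1000 means the planner assumes only about 8 MB of data is cached by
the OS, which is quite low for a machine with 1 GB of RAM. A rough
postgresql.conf sketch (the exact numbers are guesses for this
hardware, not measured recommendations):

  effective_cache_size = 65536   # 8kB pages, i.e. roughly 512 MB of OS cache
  random_page_cost = 3           # default is 4; lower it if most reads hit the cache

Whether these actually change the plan is something only explain
analyze on the real data can tell.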
2) Another thing I'm missing is enough information about the table
and the query itself. What is the execution plan used? Was the table
modified / vacuumed / analyzed recently?
I vacuum analyze the entire DB daily, via a cron entry (at 4AM).
But I think the problem is that this particular table had not been
vacuum analyzed after the 20000 records were inserted (the
query planner was choosing a seq. scan when the table had about
a dozen records --- and the seq. scan was, indeed, 10 times faster;
as a test, to make sure that the query planner would do the right
thing when the number of records was high, I inserted 20000
records and tried again --- now the seq. scan was 100 times
slower, but it was still chosen; it was at that point that I did a
search through the archives and then posted the question).
But now, after reading the replies, I did a vacuum analyze on
this table, and the query planner is choosing the index
scan.
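A quick way to check whether the statistics are stale is to compare
what the planner believes with what's actually in the table (again,
"mytable" stands for the real name):

  -- planner's view; only updated by vacuum / analyze
  select relname, relpages, reltuples from pg_class
    where relname = 'mytable';
  -- reality
  select count(*) from mytable;

If reltuples still says a dozen rows while count(*) says 20000, the
planner is costing the query against the old, tiny table.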
Without this information it's entirely possible that PostgreSQL is
using invalid values and thus generating a suboptimal execution plan.
There are many cases where a sequential scan is better (faster, does
less I/O, etc.) than an index scan.
But as the tests yesterday revealed, this was not the case
(explain analyze was reporting execution times that showed the
index scan to be 100 times faster!)
For example, if the table has grown and was not analyzed recently,
OK, now I'm quite sure that this is, indeed, the case (as
you can see from my description above).
PostgreSQL may still believe it's small and thus use the sequential
scan. Or maybe the effective cache size is set improperly (too low in
this case), so PostgreSQL thinks just a small fraction of the data is
cached, which means a lot of scattered reads in the case of an index
scan - and those are slower than sequential reads.
But these values are all defaults (except that I think I adjusted the
shared buffers size, following some guidelines I read in the PostgreSQL
documentation), which is why I felt that I was not doing something
"wrong" that would be making the query planner choose badly (well,
nothing wrong in the query or the table definition --- there was
indeed something wrong on my side).
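Both settings Tomas mentions can be changed per session, so one can
check whether they flip the plan before editing postgresql.conf (the
values below are only illustrative):

  set effective_cache_size = 65536;  -- 8kB pages, roughly 512 MB assumed cached
  set random_page_cost = 2;          -- default is 4
  explain select * from mytable
    where field1 = 1 and some_other_field = 2;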
There are many such cases - the common belief that an index scan is
always better than a sequential scan is incorrect.
No, you can rest assured that this was not the case with me! I
do understand the basic notion that, for example, bubble sort
is faster than most N log N sort algorithms if you have an
array of 3 or 4 elements (or, depending on the nature of the
data, if it is always close to sorted, etc.)
One thing is that the cases where a seq. scan is faster tend
to be when there aren't many records, and therefore the
execution times are low anyway --- this seems like an argument
for being biased toward index scans; but yes, I
guess if one has to run several thousand queries like those,
then the fact that the query takes 0.5 ms instead of 1 ms
does count.
But most of these cases
can be identified using explain analyze output (which is missing in your
post).
I can't reproduce the seq. scan now, and given that I seem to
have found the reason for the unexpected result, there probably
is no point in showing you something that exhibits no problem
to be debugged.
The data you supplied are not 'proof' that the index scan is better
than the sequential scan in this case, as the data might have been
cached by previous queries.
No, that was not the case (another notion that is quite clear
in my mind :-)). I repeated all queries no fewer than 10 times,
alternating with enable_seqscan on and off, etc. The times
I supplied seemed to be the "asymptotic" values, once things had
been cached as much as they were going to be.
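In other words, the measurements were taken roughly like this, with
each explain analyze repeated until its "Total runtime" stopped
improving (names are placeholders as before):

  set enable_seqscan to on;
  explain analyze select * from mytable
    where field1 = 1 and some_other_field = 2;   -- run several times
  set enable_seqscan to off;
  explain analyze select * from mytable
    where field1 = 1 and some_other_field = 2;   -- run several times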
The port to 8.x might help
Given this, and some of the other replies, I guess I'll move
this way up in my list of top-priority things to do.
2) try to use ANALYZE on the table and run the queries again
Bingo! :-)
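For a single freshly-loaded table this doesn't require the nightly
whole-database run, by the way. 7.4 has no built-in autovacuum (that
only arrived in 8.1; 7.4 has just the contrib pg_autovacuum daemon),
so after a bulk load the planner won't see the new row counts until
something like this is run ("mytable" is again a placeholder):

  analyze mytable;
  -- or, to also reclaim dead space left by updates/deletes:
  vacuum analyze mytable;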
3) review the settings in postgresql - a nice starting point is here
http://www.powerpostgresql.com/PerfList
I'll definitely take a look at this, so that I can figure things
out better if something similar arises in the future.
Thanks,
Carlos
--