SELECT and UPDATE statements are quite slow on a large table with more
than 600,000 rows. The table consists of 11 columns (nothing special).
The column "id" (int8) is the primary key and has a btree index on it.
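For reference, the table is shaped roughly like this (all column names
other than "id" and all types other than int8 are placeholders, not the
real definition):

```sql
CREATE TABLE "table" (
    id    int8 PRIMARY KEY,  -- the primary key implicitly gets a unique btree index
    col2  varchar(32),       -- placeholder; 10 further ordinary columns
    col3  varchar(32)
    -- ...
);
```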
The following select statement takes nearly 500ms:
SELECT * FROM table WHERE id = 600000;
Prepending "EXPLAIN" to the statement reveals a seq scan:
EXPLAIN SELECT * FROM table WHERE id = 600000;
"Seq Scan on table (cost=0.00..15946.48 rows=2 width=74)"
" Filter: (id = 600000)"
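For completeness, EXPLAIN ANALYZE (available in 7.4) would show the
actual execution time next to the planner's estimate, which should
confirm where the ~500ms is spent:

```sql
EXPLAIN ANALYZE SELECT * FROM "table" WHERE id = 600000;
```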
I tried a VACUUM FULL and a REINDEX, but neither had any effect. Why is
PostgreSQL not using the created index?
Or is there any other way to improve performance on this query?
The PostgreSQL installation is an out-of-the-box installation with no
further optimization. The server is running SUSE Linux 9.1, kernel
2.6.4-52-smp. (Quad Xeon 2.8GHz, 1GB RAM)
SELECT version();
"PostgreSQL 7.4.2 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.3
(SuSE Linux)"
Thanks for any hints,
Kjeld