Dear List,
I would like to improve seq scan performance. :-)
I have a table with many columns, but I search on only one of them. That
column is indexed with a btree using text_pattern_ops. The search
predicate is: r LIKE '%aaa%'
When I create a second table containing only that column's values, the
search is faster when the data is cached, but worse when the data isn't
in cache.
My thinking is:
- With a wide table (many columns), a seq scan has to read all the
columns, not only the one being searched.
- If an index stored the full values of the column, a sequential scan
over the index could perform better (less I/O over a smaller data set).
Is it possible to create such an index on the table and then run a
sequential scan over the index values?
You can fake this (as a test) by creating a separate table with just your
column of interest plus the row id (primary key), kept in sync via
triggers, and seq scanning that table instead. Seq scanning this small
table should be fast. Of course, if you have conditions on several
columns it will get more complex.
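A minimal sketch of that side-table approach. The table and column names
(big, id, r) are illustrative assumptions, not from the original post, and
ON CONFLICT assumes PostgreSQL 9.5 or later:

CREATE TABLE big_r (
    id integer PRIMARY KEY REFERENCES big (id) ON DELETE CASCADE,
    r  text
);

-- One-time population from the wide table.
INSERT INTO big_r SELECT id, r FROM big;

-- Keep the narrow table in sync on INSERT/UPDATE of the wide one.
CREATE OR REPLACE FUNCTION sync_big_r() RETURNS trigger AS $$
BEGIN
    INSERT INTO big_r (id, r) VALUES (NEW.id, NEW.r)
    ON CONFLICT (id) DO UPDATE SET r = EXCLUDED.r;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER big_sync_r
AFTER INSERT OR UPDATE OF r ON big
FOR EACH ROW EXECUTE PROCEDURE sync_big_r();

-- The search then seq-scans only the narrow table:
SELECT id FROM big_r WHERE r LIKE '%aaa%';

Deletes are handled here by the ON DELETE CASCADE foreign key rather than
a third trigger.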
Note that a btree index cannot optimize '%aaa%' (a pattern with a leading
wildcard). You could use trigrams instead.
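For example, with the contrib extension pg_trgm (assuming PostgreSQL 9.1
or later, where trigram indexes can serve LIKE directly; table and column
names are again illustrative):

CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- A GIN trigram index can satisfy LIKE '%aaa%' without a seq scan:
CREATE INDEX big_r_trgm_idx ON big USING gin (r gin_trgm_ops);

SELECT id FROM big WHERE r LIKE '%aaa%';

EXPLAIN on that query should then show a bitmap index scan on
big_r_trgm_idx instead of a sequential scan, provided the pattern is
selective enough for the planner to prefer the index.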
--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance