Tom Lane <tgl@xxxxxxxxxxxxx> wrote:
> Any sane text search application is going to try to filter out
> common words as stopwords; it's only the failure to do that that's
> making this run slow.

Imagine a large table with a GIN index on a tsvector column. The user wants a particular document and is sure four words appear in it. One of the words appears in only 100 documents; the other three each appear in about a third of the documents. Is it saner to make the user wait for a table scan, or to make them wade through 100 rows rather than four?

I'd rather have the index used for the selective term, and apply the remaining tests to the rows retrieved from the heap.

-Kevin
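A rough back-of-envelope sketch of the two plans being compared, assuming a hypothetical table of one million documents (the table size and selectivities are illustrative, not from the original thread):

```python
# Hypothetical numbers: a 1,000,000-row table, one search term matching
# 100 documents, three terms each matching about a third of the table.
n_docs = 1_000_000
rows_rare = 100              # documents containing the rare word
sel_common = 1 / 3           # selectivity of each common word

# Plan A: use the GIN index for the rare word only, then recheck the
# three common words against the 100 heap rows fetched.
heap_fetches_plan_a = rows_rare

# Plan B: with the common words treated as stopwords (or the index
# unusable), fall back to scanning the whole table.
heap_fetches_plan_b = n_docs

# Expected final matches if the common words were independent:
expected_matches = rows_rare * sel_common ** 3

print(heap_fetches_plan_a)   # 100 rows to wade through
print(heap_fetches_plan_b)   # 1,000,000 rows scanned
print(round(expected_matches))
```

Either way the user inspects only a handful of true matches; the difference is whether the system touches 100 rows or a million to find them.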