> SELECT * FROM FullTextSearch WHERE content_tsv_gin @@
> plainto_tsquery('english', 'good');
>
> It's slow (> 30 sec.) for some GB (27886 html files, originally 73 MB zipped).
> The planner obviously always chooses table scan

Hello,

A probable reason for the time difference is the cost of decompressing toasted content.
At least in 8.3, the planner was not good at estimating it.

I'm getting better overall performance since I stopped collecting statistics on tsvectors.
An alternative would have been to disallow compression on them.
I'm aware this is a drastic approach and would not recommend it without testing.

The benefit may depend on the type of data you are indexing.
In our use case these are error logs with many Java stack traces, hence with many poorly discriminative lexemes.

see: http://www.postgresql.org/message-id/27953.1329434125@xxxxxxxxxxxxx

as a comment on

http://www.postgresql.org/message-id/C4DAC901169B624F933534A26ED7DF310861B363@xxxxxxxxxxxxxxxxxxxxxxxxxx

regards,

Marc Mamin
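
P.S. In case it helps, a minimal sketch of the two options mentioned above, using the
table and column names from the quoted query (my assumption about your schema; untested
here, so please verify on a copy of the data first):

  -- Option 1: stop collecting planner statistics on the tsvector column.
  -- A statistics target of 0 disables statistics collection for this column.
  ALTER TABLE FullTextSearch ALTER COLUMN content_tsv_gin SET STATISTICS 0;
  ANALYZE FullTextSearch;

  -- Option 2 (drastic, test first): keep the tsvector out of line but uncompressed,
  -- which avoids the decompression cost on toasted content.
  -- Note: this only affects values stored after the change; existing rows keep
  -- their current storage until they are rewritten.
  ALTER TABLE FullTextSearch ALTER COLUMN content_tsv_gin SET STORAGE EXTERNAL;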