I am trying to implement an incremental search engine. The service should start searching once the user has typed at least 3 characters. The strategy I have in mind:

a) Create a function string_to_three_char_tsvector(str text) that generates a tsvector composed of the three-letter lexemes that begin all the admissible words within the parameter str.

b) Using this function, create an indexed tsvector column, three_char_index.

c) Given the query string query_string (assumed to contain at least 3 characters):

    SELECT *
    FROM mytable,
         plainto_tsquery((string_to_three_char_tsvector(query_string))::text) AS query
    WHERE three_char_index @@ query
      AND text_field LIKE '%' || query_string || '%';

Once I've narrowed the field of possibilities down to the correct 3-letter lexemes, there are fewer than 100 rows left to scan with LIKE. I could even repeat the exercise with 4-letter lexemes if those numbers were to grow, or if I needed an extra boost in performance.

So, two questions for postgres/tsearch experts:

1) Does that seem like a decent overall strategy?

2) About the function string_to_three_char_tsvector(text): I cannot think of an elegant way of writing it. Is it possible to do better than the following?

    str => cast to tsvector => cast to text => for each lexeme string, take the first-three-char substring => concatenate back together => cast to tsvector

Is there a nice way of performing the middle steps, like splitting the string into an array?
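For concreteness, here is a sketch of what I have in mind for (a) and (b). It is only a sketch: it assumes PostgreSQL 9.6+ (for tsvector_to_array/array_to_tsvector, which cover the "split into an array" step), the 'simple' configuration (no stemming, so the 3-char prefixes stay predictable), and PostgreSQL 12+ for the generated column; the WHERE length(...) filter is my guess at what "admissible words" should mean:

    -- Sketch, not a finished implementation.
    -- tsvector_to_array/array_to_tsvector need PostgreSQL 9.6+.
    CREATE OR REPLACE FUNCTION string_to_three_char_tsvector(str text)
    RETURNS tsvector AS $$
      SELECT array_to_tsvector(
               ARRAY(SELECT DISTINCT left(lexeme, 3)
                     FROM unnest(tsvector_to_array(to_tsvector('simple', str))) AS lexeme
                     WHERE length(lexeme) >= 3))   -- drop words shorter than the prefix
    $$ LANGUAGE sql IMMUTABLE;

    -- Generated columns need PostgreSQL 12+; on older versions,
    -- three_char_index would have to be maintained by a trigger instead.
    ALTER TABLE mytable
      ADD COLUMN three_char_index tsvector
      GENERATED ALWAYS AS (string_to_three_char_tsvector(text_field)) STORED;
    CREATE INDEX mytable_three_char_idx ON mytable USING GIN (three_char_index);

Note that array_to_tsvector produces a positionless tsvector, which is fine here since only @@ matching is needed, not ranking.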