Phoenix Kiula <phoenix.kiula@xxxxxxxxx> writes:
>  Index Scan using new_idx_testimonials_userid on testimonials
>  (cost=0.00..157.78 rows=1 width=9) (actual time=8809.715..8809.715
>  rows=0 loops=1)
>    Index Cond: ((user_id)::text = 'superman'::text)
>    Filter: ((title_encrypted)::text = 'b333dc1b0992cb8c70b58a418211389a'::text)
>  Total runtime: 8809.750 ms

This is using the index to fetch the rows that match user_id = 'superman', and then testing each fetched row to see if it has the desired value of title_encrypted.  The fact that hardly any rows pass the filter test tells us nearly nothing about how long this should be expected to run.

The rather high estimated cost suggests that the planner thinks there are several dozen rows matching the index condition, and the actual runtime suggests that there are actually hundreds or thousands of 'em.  If so, your problem is that you need a different index.  I'd bet on an index on title_encrypted being more useful for this query than the one on user_id; or you could experiment with a two-column index.

			regards, tom lane

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
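(A sketch of the two alternatives suggested above, assuming the testimonials table and columns from the quoted plan; the index names here are made up, and whether the single-column or two-column form wins depends on the actual selectivity of each predicate:)

```sql
-- Single-column index on title_encrypted: useful if that predicate is
-- far more selective than user_id, as the actual runtime suggests.
CREATE INDEX idx_testimonials_title_encrypted
    ON testimonials (title_encrypted);

-- Or a two-column index covering both predicates, so the index scan
-- itself satisfies the whole WHERE clause with no per-row Filter step.
CREATE INDEX idx_testimonials_userid_title
    ON testimonials (user_id, title_encrypted);

-- Then re-check the plan:
EXPLAIN ANALYZE
SELECT * FROM testimonials
 WHERE user_id = 'superman'
   AND title_encrypted = 'b333dc1b0992cb8c70b58a418211389a';
```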