Guido Niewerth <gniewerth@xxxxxxxxxxx> writes:
> And this is the execution plan. It looks like it does a slow sequential scan where it's able to do an index scan:

> 2015-11-02 17:42:10 CET LOG:  duration: 5195.673 ms  plan:
>   Query Text: SELECT NOT EXISTS( SELECT 1 FROM custom_data WHERE key = old.key LIMIT 1 )
>   Result  (cost=0.09..0.10 rows=1 width=0) (actual time=5195.667..5195.667 rows=1 loops=1)
>     Output: (NOT $0)
>     Buffers: shared hit=34 read=351750
>     InitPlan 1 (returns $0)
>       ->  Limit  (cost=0.00..0.09 rows=1 width=0) (actual time=5195.662..5195.662 rows=0 loops=1)
>             Output: (1)
>             Buffers: shared hit=34 read=351750
>             ->  Seq Scan on public.custom_data  (cost=0.00..821325.76 rows=9390835 width=0) (actual time=5195.658..5195.658 rows=0 loops=1)
>                   Output: 1
>                   Filter: (custom_data.key = $15)
>                   Buffers: shared hit=34 read=351750

It looks like you're getting bit by an inaccurate estimate of what will
be the quickest way to satisfy a LIMIT query.  In this particular
situation, I'd advise just dropping the LIMIT, as it contributes nothing
useful.  (If memory serves, 9.5 will actually ignore constant-LIMIT
clauses inside EXISTS(), because people keep writing them even though
they're useless.  Earlier releases do not have that code though.)

			regards, tom lane
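
For concreteness, a minimal sketch of the suggested rewrite, reusing the
table and column names from the quoted plan (the query is otherwise
assumed unchanged):

    -- same query with the constant LIMIT removed, per the advice above;
    -- EXISTS() already stops at the first matching row, so the LIMIT 1
    -- was redundant
    SELECT NOT EXISTS(
        SELECT 1 FROM custom_data WHERE key = old.key
    );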