David Popiashvili wrote:
> I have a database with a few hundred million rows. I'm running the following query:
>
> select * from "Payments" as p
> inner join "PaymentOrders" as po
>     on po."Id" = p."PaymentOrderId"
> inner join "Users" as u
>     on u."Id" = po."UserId"
> inner join "Roles" as r
>     on u."RoleId" = r."Id"
> where r."Name" = 'Moses'
> LIMIT 1000
>
> When the where clause finds a match in the database, I get the result in a few
> milliseconds, but if I modify the query and specify a non-existent r."Name" in
> the where clause, it takes far too long to complete. I guess that PostgreSQL is
> doing a sequential scan on the Payments table (which contains the most rows),
> comparing each row one by one.
>
> Isn't PostgreSQL smart enough to check first whether the Roles table contains
> any row with Name 'Moses'?
>
> The Roles table contains only 15 rows, while Payments contains ~350 million.
>
> I'm running PostgreSQL 9.2.1.
> Here are the explain analyze results: http://explain.depesz.com/s/7e7

Can you also show the plan for the good case?

Yours,
Laurenz Albe

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
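
[Editor's note] One workaround sometimes used on 9.2, where WITH queries act as
optimization fences, is to move the cheap Roles lookup into a CTE so it is
evaluated on its own before the large joins. This is only a sketch built from
the schema shown in the question, and whether it actually short-circuits the
scan depends on the plan the optimizer chooses:

```sql
-- On PostgreSQL 9.2 a CTE is materialized separately (an "optimization
-- fence"), so the 15-row Roles table is filtered first; if no role named
-- 'Moses' exists, the CTE is empty and the joins have nothing to drive them.
WITH matching_roles AS (
    SELECT "Id"
    FROM "Roles"
    WHERE "Name" = 'Moses'
)
SELECT p.*
FROM matching_roles AS r
JOIN "Users" AS u          ON u."RoleId" = r."Id"
JOIN "PaymentOrders" AS po ON po."UserId" = u."Id"
JOIN "Payments" AS p       ON p."PaymentOrderId" = po."Id"
LIMIT 1000;
```

Comparing EXPLAIN ANALYZE output for this form against the original, for both
the matching and the non-matching role name, would show whether the fence
changes the join strategy.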