On Fri, Sep 26, 2014 at 9:06 AM, Simon Riggs <simon@xxxxxxxxxxxxxxx> wrote:
> If we can at least agree it is a problem, we can try to move forwards.

Well, that's a good question. I don't think we do, and I think the reason is that we haven't actually pinned down exactly what the problem is.

The real problem here is that the ideal index for the query does not exist, and Postgres is trying to choose between two inappropriate indexes, a decision that is very hard for it to make. If it guesses wrong in *either* direction it'll go very poorly. We can try to improve the frequency of getting the right decision, but it'll never be 100%, and even if it were, it still wouldn't perform as well as the right index would have.

I have seen plenty of applications where the slowdown was in the reverse direction -- where a query like "find the last login for the current user" was planned just as Josh is asking for, by retrieving all the records for the user and sorting by login time, and it caused large problems in production when some users had a disproportionately large number of records.

The real solution for users is to create a compound index on both columns (or a partial index in some cases). Trying to make do with either an ordered scan or an index scan plus a sort is going to cause problems in real-world usage.

In fact, I think the real story here is that Postgres is doing a surprisingly good job of making do without the right index, and that's letting users get surprisingly far before they run into problems. That may not be the best thing for users in the long run, but it's a problem that should be solved by better development tools that help users identify scalability problems early.

-- 
greg

-- 
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
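[As a sketch of the compound-index fix described above -- the table and column names (a `logins` table with `user_id` and `login_time`) are assumptions for illustration, not from the original thread:]

```sql
-- Compound index covering both the filter column and the sort column.
-- Hypothetical schema; adjust names to your own tables.
CREATE INDEX logins_user_id_login_time_idx
    ON logins (user_id, login_time);

-- "Find the last login for the current user" can then be answered by a
-- single backward scan of that index, regardless of how many rows the
-- user has, instead of fetching all of the user's rows and sorting:
SELECT login_time
FROM logins
WHERE user_id = $1
ORDER BY login_time DESC
LIMIT 1;

-- The partial-index variant mentioned above, for cases where only a
-- known subset of rows is ever queried (predicate is illustrative):
CREATE INDEX logins_active_login_time_idx
    ON logins (login_time)
    WHERE user_id IS NOT NULL;
```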