On 30 September 2014 at 10:25, Simon Riggs <simon@xxxxxxxxxxxxxxx> wrote:

> On 30 September 2014 00:00, Tom Lane <tgl@xxxxxxxxxxxxx> wrote:
>
>> The existing cost estimation code effectively assumes that they're perfectly uniformly distributed; which is a good average-case assumption but can be horribly wrong in the worst case.
>
> Agreed. This is the main observation from which we can work.
>
>> If we could settle on some other model for the probable distribution of the matching tuples, we could adjust the cost estimates for LIMIT accordingly. I have not enough statistics background to know what a realistic alternative would be.
>
> I'm not sure that the correlation alone is sufficient to be able to do that. We'd need to estimate where the values looked for are likely to be wrt other values, then increase the estimate accordingly. That sounds like a lot of pushups grovelling through quals and comparing against stats. So my thinking is actually to rule that out, unless you've some ideas for how to do that?
>
>> Another possibility is to still assume a uniform distribution but estimate for, say, a 90% probability instead of 50% probability that we'll find enough tuples after scanning X amount of the table. Again, I'm not too sure what that translates to in terms of the actual math, but it sounds like something a statistics person could do in their sleep.

The problem is one of risk. Whatever distribution we use, it will be wrong in some cases and good in others. For example, if we look at "10 Most Recent Calls" for a user, then frequent users would have one distribution, infrequent users another. So we have multiple distributions in the same data. We just can't hold enough information to make sense of this. Think about how much data needs to be scanned if the user has made only 9 calls.

What I've done in the past is to rewrite the query in different ways to force different plans, then call each plan depending upon the user characteristics. This can also be done with hints, in a more ignorant way.
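To put rough numbers on that risk (a back-of-the-envelope sketch, not planner code; the table size, estimate and LIMIT below are all invented), the uniform assumption behind an abort-early plan works out like this:

/* Illustration only: under the uniform-distribution assumption, an ordered
 * scan that needs `limit_rows` matches out of an estimated `est_matches`
 * matches in `total_rows` rows is expected to stop after roughly
 * total_rows * limit_rows / est_matches rows.  If the user actually has
 * fewer matches than the LIMIT, the scan runs to the end. */
#include <stdio.h>

int main(void)
{
    double total_rows  = 10000000.0;  /* rows in the calls table (made up)   */
    double est_matches = 5000.0;      /* planner's estimate for this user    */
    double limit_rows  = 10.0;        /* "10 Most Recent Calls"              */

    /* What the planner expects to scan under the uniform assumption */
    double expected_scan = total_rows * (limit_rows / est_matches);
    printf("expected rows scanned: %.0f\n", expected_scan);   /* 20000 */

    /* What actually happens for a user with only 9 calls: the ordered scan
     * never collects 10 matches, so it reads the whole table/index. */
    double actual_matches = 9.0;
    double actual_scan = (actual_matches < limit_rows)
                         ? total_rows
                         : total_rows * (limit_rows / actual_matches);
    printf("actual rows scanned:   %.0f\n", actual_scan);      /* 10000000 */
    return 0;
}

The 10-out-of-5000 estimate looks fine on average, but the 9-call user turns a 20,000-row expectation into a full scan.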
>> I do not think we should estimate for the worst case though. If we do, we'll hear cries of anguish from a lot of people, including many of the same ones complaining now, because the planner stopped picking fast-start plans even for cases where they are orders of magnitude faster than the alternatives.
>
> Fast-start plans still make sense when performing an IndexScan with no filter conditions. Those types of plan should not be changed from current costing - they are accurate, good and very important because of their frequency in real workloads.
>
> What I think we are seeing is Ordered plans being selected too often in preference to Sorted plans when we make selectivity or stats errors, as well as data distributions that aren't correctly described by the statistics causing much longer execution times.
>
> Here are some plan selection strategies:
>
> * Cost based - attempt to exactly calculate the cost based upon existing stats, increasing the complexity of the cost calculation to cover other aspects. Even if we do that, this may not be that helpful in covering the cases where the stats turn out to be wrong.
>
> * Risk based - a risk-adjusted viewpoint would be that we should treat the cost as mid-way between the best and the worst. The worst is clearly scanning (100% - N) of the tuples, the best is just N tuples. So we should be costing scans with excess filter conditions as a (100% scan)/2, no matter the conditions, based purely upon risk.
>
> * Simplified heuristic - deselect ordered plans when they are driven from scans without quals or index scans with filters, since the risk-adjusted cost is likely to be higher than the sorted cost. Inspecting the plan tree for this could be quite costly, so it would only be done when the total cost is $high, prior to it being adjusted by LIMIT.
>
> In terms of practical steps... I suggest the following:
>
> * Implement enable_orderedscan = on (default) | off. A switch to allow plans to de-select ordered plans, so we can more easily see the effects of such plans in the wild.
>
> * Code heuristic approach - I can see where to add my heuristic in the grouping planner. So we just need to do a left? deep search of the plan tree looking for scans of the appropriate type and bail out if we find one.

After looking at this for some time I now have a patch that solves this. It relies on the observation that index scans with no bounded quals don't play nicely with LIMIT.

The solution relies upon the point that LIMIT does not reduce the startup cost of plans, only the total cost. So we can solve the problem by keeping the total cost estimate and simply moving some of it into the startup cost, so that LIMIT does not reduce the cost as much as before.

It's a simple patch, but it solves the test cases I know about and adds almost nothing to planning time.

I tried much less subtle approaches involving direct prevention of LIMIT pushdown, but the code was much too complex for my liking.
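To sketch why that works (this is only an approximation of how LIMIT discounts a path's cost, not the patch itself; all numbers are invented):

/* Illustration only: LIMIT scales the run cost (total - startup) by
 * limit_rows / path_rows but leaves the startup cost alone.  Moving part of
 * a risky path's cost from "run" into "startup" therefore shrinks the
 * discount LIMIT can apply, while the undiscounted total stays the same. */
#include <stdio.h>

static double limit_cost(double startup, double total,
                         double path_rows, double limit_rows)
{
    if (limit_rows >= path_rows)
        return total;
    return startup + (total - startup) * (limit_rows / path_rows);
}

int main(void)
{
    double path_rows = 5000.0, limit_rows = 10.0;

    /* Unbounded index scan today: tiny startup cost, so LIMIT slashes it. */
    printf("before: %.1f\n", limit_cost(1.0, 100000.0, path_rows, limit_rows));

    /* Same total cost, but with part of it treated as startup cost: the
     * LIMIT discount is much smaller. */
    printf("after:  %.1f\n", limit_cost(50000.0, 100000.0, path_rows, limit_rows));
    return 0;
}

With the same total cost, the plan whose cost sits mostly in startup keeps most of that cost under LIMIT, so the risky fast-start plan no longer looks artificially cheap.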
- - -

The only other practical way to do this would be to have a LimitPlanDisaster node:

LimitPlanDisaster
-> PreSortedPath
-> CheapestPath

The PlanDisaster node would read the PreSortedPath up to a cost limit C. Once we hit that cost limit we switch to the CheapestPath and execute that instead for the remainder of the LIMIT.

Or we could do time limits, though it's harder to make that make sense.
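Purely as an illustration of that switch-over idea (a toy simulation, not executor code; the LIMIT, cost budget and per-row cost below are all invented):

/* Toy simulation: pull rows from the ordered "fast start" source until a
 * cost budget is spent, then finish the LIMIT from the cheapest source. */
#include <stdio.h>

#define LIMIT_ROWS           10
#define COST_BUDGET          1000.0
#define COST_PER_ORDERED_ROW 300.0   /* pretend each ordered-path row is expensive */

int main(void)
{
    double spent = 0.0;
    int returned = 0, ordered_rows = 0, cheapest_rows = 0;

    /* Phase 1: ordered path, while still under the cost budget */
    while (returned < LIMIT_ROWS && spent + COST_PER_ORDERED_ROW <= COST_BUDGET)
    {
        spent += COST_PER_ORDERED_ROW;
        printf("row %d from ordered path (cost so far %.0f)\n", ++ordered_rows, spent);
        returned++;
    }

    /* Phase 2: budget exhausted before the LIMIT was satisfied -> switch */
    while (returned < LIMIT_ROWS)
    {
        printf("row %d from cheapest path (after switch)\n", ++cheapest_rows);
        returned++;
    }
    return 0;
}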
--
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachment: avoid_limit_pushdown.v4.patch
Description: Binary data