> (2) seems fairly hard generically, since we'd have to keep track of
> the tids returned from the IndexScan to allow us to switch to a
> different plan and avoid re-issuing rows that we've already returned.
> But maybe if we adapted the IndexScan plan type so that it adopted a
> more page oriented approach internally, it could act like a
> bitmapscan. Anyway, that would need some proof that it would work and
> sounds like a fair task.
>
> (1) sounds more easily possible and plausible. At the moment we have
> enable_indexscan = off. If we had something like
> plan_cost_weight_indexscan = N, we could selectively increase the cost
> of index scans so that they would be less likely to be selected, i.e.
> plan_cost_weight_indexscan = 2 would mean an indexscan would need to
> be half the cost of any other plan before it was selected (parameter
> name selected so it could apply to all parameter types). The reason to
> apply this weighting would be to calculate "risk adjusted cost", not
> just estimated cost.
>
> --
> Simon Riggs                   http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Training & Services

Another option would be for the bulk insert/update/delete to track the
distribution stats as the operation progresses; if it detects that it is
changing the distribution of the data beyond a certain threshold, it
would update the pg stats accordingly.

--
Matt Clarkson
Catalyst.Net Limited
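
To make the weighting idea concrete: plan_cost_weight_indexscan is only a
proposal and does not exist, but a rough approximation is already possible
with the cost GUCs that do exist, by making index access look more expensive
to the planner for a single query. A minimal sketch, assuming an illustrative
orders table and customer_id column:

  -- The blunt instrument that exists today: all or nothing, session-wide.
  SET enable_indexscan = off;
  RESET enable_indexscan;

  -- Rough approximation of a "risk adjusted" weighting: raise the cost of
  -- index access for one query only, then let SET LOCAL revert at commit.
  BEGIN;
  SET LOCAL random_page_cost = 8.0;        -- default is 4.0
  SET LOCAL cpu_index_tuple_cost = 0.02;   -- default is 0.005
  EXPLAIN ANALYZE
  SELECT * FROM orders WHERE customer_id = 42;   -- illustrative query
  COMMIT;                                        -- SET LOCAL settings revert here

The difference to the proposal is that random_page_cost shifts the cost model
for every plan that reads pages randomly, whereas a per-plan-type weight would
penalise only index scans.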
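
On the stats-tracking idea: nothing in PostgreSQL today adjusts the statistics
while a bulk operation is still in progress, but two existing mechanisms come
close. A sketch, again with an illustrative orders table and file path:

  -- Per-table thresholds that already exist: autoanalyze runs once the number
  -- of changed rows exceeds threshold + scale_factor * reltuples.
  ALTER TABLE orders SET (
      autovacuum_analyze_scale_factor = 0.02,   -- fraction of the table changed
      autovacuum_analyze_threshold    = 1000    -- plus this base row count
  );

  -- Or refresh the statistics as part of the bulk operation itself. ANALYZE,
  -- unlike VACUUM, may run inside a transaction block, so the new rows and
  -- the new statistics become visible together at commit.
  BEGIN;
  COPY orders FROM '/tmp/orders.csv' WITH (FORMAT csv);   -- illustrative load
  ANALYZE orders;
  COMMIT;

The per-table settings only make autoanalyze react sooner after the change;
the explicit ANALYZE closes the window where the new data is visible but the
planner still sees the old distribution.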