On 3/11/15 11:15 AM, Tom Lane wrote:
> Gunnlaugur Thor Briem <gunnlaugur@xxxxxxxxx> writes:
>> Yes, I think that's it: I've just realized that immediately prior to the
>> INSERT, in the same transaction, an unfiltered DELETE has been issued; i.e.
>> the whole table is being rewritten. Then the INSERT is issued ... with a
>> WHERE clause on non-existence in the (now empty) table.
>>
>> In that case the WHERE clause is of course unnecessary, since it will
>> always evaluate to true (and we've locked the whole table for writes).
>> Looks like it is a lot worse than unnecessary, though, if it triggers
>> this performance snafu in EXPLAIN INSERT.
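
For concreteness, the pattern being described looks something like this
(table and column names are made up for illustration):

    BEGIN;

    -- Rewrite the whole table: every existing row becomes a dead tuple,
    -- and its index entries stay behind until VACUUM cleans them up.
    DELETE FROM items;

    -- Re-populate. The NOT EXISTS guard can only evaluate to true here,
    -- since the table is empty at this point in the transaction.
    INSERT INTO items (id, payload)
    SELECT s.id, s.payload
    FROM staging s
    WHERE NOT EXISTS (SELECT 1 FROM items i WHERE i.id = s.id);

    COMMIT;
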
> Ah-hah. So what's happening is that the planner is doing an indexscan
> over the entire table of now-dead rows, looking vainly for an undeleted
> maximal row. Ouch.
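
That probe is essentially equivalent to running the query below
(continuing the made-up names above). With every heap row deleted by the
current transaction, the scan has to step over each dead index entry in
turn before it can conclude there is no live maximum:

    SELECT id FROM items ORDER BY id DESC LIMIT 1;
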
> I wonder how hard it would be to make the indexscan give up after hitting
> N consecutive dead rows, for some suitable N, maybe ~1000. From the
> planner's viewpoint it'd be easy enough to fall back to using whatever
> it had in the histogram after all. But that's all happening down inside
> index_getnext, and I'm hesitant to stick some kind of wart into that
> machinery for this purpose.
ISTM what we really want here is time-based behavior, not a row count.
Given that, could we do the index probe in a subtransaction, set an alarm
for X ms, and simply abort the subtransaction if the alarm fires?
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com