On 29/12/13 10:03, Kevin Grittner wrote:
> Michael Kolomeitsev <mkolomeitsev@xxxxxxxxx> wrote:
>> it is clear to me why t1_b_a_idx is better. The question is: is
>> PostgreSQL able to see that?
>
> For a number of reasons I never consider a bulk load complete until
> I run VACUUM FREEZE ANALYZE on the table(s) involved. When I try
> your test case without that, I get the bad index choice. When I
> then run VACUUM FREEZE ANALYZE on the database I get the good index
> choice.
>
> There may be some lesser maintenance step that sets up visibility
> information and provides the planner with enough data to make a
> good choice; I just noticed that you were not following what I
> consider to be rote good practice, tried it, and it solved the
> problem.
>
> --
> Kevin Grittner
> EDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
Curious: would it be feasible to do some kind of ANALYZE during a bulk
operation? Say you could tell the system in advance that you expected to
change 20% of the records: it could then sample some of the changes and
blend the statistics, weighting the sampled changes at 0.2 and the
pre-existing statistics at 0.8.
BEGIN BULK OPERATION CHANGE 20%
[... several transactions ...]
END BULK OPERATION
The sampling could be done as part of the individual operations or at
the end of the bulk operation - whichever is deemed more practicable
(possibly a bit of both?).
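To make the proposal concrete, here is a minimal sketch of the blending
arithmetic described above. This is purely illustrative: the BULK OPERATION
syntax does not exist in PostgreSQL, and the representation of a planner
statistic as a per-value frequency map (and the function name
`blend_stats`) are assumptions for the example, not anything the planner
actually exposes.

```python
# Hypothetical sketch: blend freshly sampled statistics with pre-existing
# ones, weighted by the declared fraction of rows the bulk operation is
# expected to change (0.2 in the example above).

def blend_stats(old_freqs: dict, sampled_freqs: dict,
                change_fraction: float) -> dict:
    """Weight the sampled frequencies by change_fraction and the
    pre-existing frequencies by (1 - change_fraction)."""
    keys = set(old_freqs) | set(sampled_freqs)
    return {
        k: change_fraction * sampled_freqs.get(k, 0.0)
           + (1.0 - change_fraction) * old_freqs.get(k, 0.0)
        for k in keys
    }

old = {"a": 0.5, "b": 0.5}        # pre-existing column value frequencies
sampled = {"b": 0.25, "c": 0.75}  # frequencies observed in the bulk changes
print(blend_stats(old, sampled, 0.2))
# a: 0.8 * 0.5 = 0.40;  b: 0.2 * 0.25 + 0.8 * 0.5 = 0.45;  c: 0.2 * 0.75 = 0.15
```

Whether the sampling feeding `sampled_freqs` happens per-statement or once
at END BULK OPERATION is exactly the open question raised above.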
Cheers,
Gavin
--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance