Re: performance question (something to do w/ parameterized

I tried the seqscan disabling and got what sounds like the desired plan:

Sort  (cost=54900.62..54940.29 rows=1587 width=16) (actual time=20.208..22.138 rows=677 loops=1)
   Sort Key: f, c
   ->  Index Scan using x_f_idx, x_f_idx, ...  (cost=0.00..54056.96 rows=1587 width=16) (actual time=1.048..15.598 rows=677 loops=1)
         Index Cond: ((f = 1) OR (f = 2) OR (f = 3) ....
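
For the record, the plan above came from re-running the query with the setting toggled in my session, roughly like this (the SELECT is a stand-in, since the real statement and its long OR list aren't shown here):

SET enable_seqscan = off;

EXPLAIN ANALYZE
SELECT f, c
FROM x
WHERE f = 1 OR f = 2 OR f = 3   -- the real query has a much longer OR list
ORDER BY f, c;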


I turned off enable_seqscan in postgresql.conf, and it did indeed bring all similar queries on that table
down to sub-second response times, from the previous 6/8/10 second responses.  The elapsed time for
the application action reflected this improvement.

So that raises two questions:

1) Is there a way to enable that setting for just one query within a multi-query transaction?
   (A rough sketch of what I have in mind follows these questions.)

2) Am I opening a can of worms if I turn it off server-wide?  (PROBABLY!)
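
For question 1, what I'm hoping is possible is something along these lines (a sketch only; x, f, and c stand in for my real table and columns):

BEGIN;

-- ... earlier queries in the transaction ...

-- Turn off sequential scans just for the problem query,
-- then put the setting back right away.
SET enable_seqscan = off;

SELECT f, c
FROM x
WHERE f = 1 OR f = 2 OR f = 3   -- really a much longer OR list
ORDER BY f, c;

RESET enable_seqscan;

-- ... remaining queries in the transaction ...

COMMIT;

(I gather SET LOCAL enable_seqscan = off would also work, but it stays in effect for the rest of the transaction rather than just the one query.)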

I've already had to tune the server to account for the fact that the database fits easily
in memory cache but the processors are slow (PIII 550 MHz Xeons).
I've lowered the cost of random page fetches and raised the cost of per-row processing
as follows (the configuration defaults are noted as well):

# - Planner Cost Constants -

#JDT: default effective_cache_size = 1000       # typically 8KB each
effective_cache_size = 50000    # typically 8KB each
#JDT: default: random_page_cost = 4             # units are one sequential page fetch cost
random_page_cost = 2            # units are one sequential page fetch cost
#JDT: default: cpu_tuple_cost = 0.01            # (same)
cpu_tuple_cost = 0.10           # (same)
#cpu_index_tuple_cost = 0.001   # (same)
#JDT: default: cpu_operator_cost = 0.0025       # (same)
cpu_operator_cost = 0.025       # (same)
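
As an aside, the same planner constants can be set per-session with SET, so candidate values can be tried out without touching postgresql.conf, e.g.:

SET effective_cache_size = 50000;
SET random_page_cost = 2;
SET cpu_tuple_cost = 0.10;
SET cpu_operator_cost = 0.025;

-- sanity check what the planner will actually use
SHOW cpu_tuple_cost;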


Any suggestion for how to fix today's query (by turning seqscan off) without wrecking other queries is welcome,
as is any word on whether I've blundered with the settings above (they may or may not be optimal, but they
definitely fixed some formerly problematic queries on that machine).

My transactions are large, multi-query, serializable transactions, so it's also important that any optimization targeting a single query not affect the other queries in the same transaction.

Thanks for the help.

Tom Lane wrote:
Jeffrey Tenny <jeffrey.tenny@xxxxxxxxxxx> writes:
I dropped the multicolumn index 'testindex2', and a new explain analyze looks like this:

Sort  (cost=35730.71..35768.28 rows=1503 width=16) (actual time=962.555..964.467 rows=677 loops=1)
   Sort Key: f, c
   ->  Seq Scan on x  (cost=0.00..34937.60 rows=1503 width=16) (actual time=5.449..956.594 rows=677 loops=1)
         Filter: ((f = 1) OR (f = 2) OR (f = 3) ...

Turning on the server debugging again, I got roughly identical
query times with and without the two column index.

That's good, actually, seeing that the planner thinks they're close to
the same speed too.  Now try "set enable_seqscan = off" to see if you
can force the multi-index-scan plan to be chosen, and see how that does.

			regards, tom lane


