You could try inserting a lot more rows. I'd create a function or script to do several million inserts with random values, then ANALYZE the table and rerun the EXPLAIN.
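Something along these lines would do it (a rough sketch; the row count and value ranges are made up, and it assumes the score table has no other required columns):

INSERT INTO score (leaderboardid, score, active)
SELECT (random() * 100)::int,    -- spread rows over ~100 leaderboards
       (random() * 10000)::int,  -- random scores in 0..10000
       random() < 0.9            -- roughly 90% of rows active
FROM generate_series(1, 5000000);

ANALYZE score;
-- then rerun your EXPLAIN ANALYZE and see if the plan changes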
I think in the end you're probably going to see a couple of bitmap index scans ANDed together, followed by a bitmap heap scan.
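For the planner to do that, it needs indexes it can combine. A sketch of what I'd try (the index names are just illustrative, and the partial index is only a guess at your query shape):

CREATE INDEX score_leaderboardid_idx ON score (leaderboardid);
CREATE INDEX score_score_idx ON score (score);

-- A partial composite index matching this exact query
-- may also be worth testing:
CREATE INDEX score_lb_score_idx ON score (leaderboardid, score)
WHERE active;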
Keep in mind that postgres stores statistics about each column individually, but doesn't at this point store statistics about combinations of columns.
Preparing the query might actually hurt performance, because postgres treats a prepare as "plan this query, but I'm not going to tell you the values of the parameters". If you let the query be planned fresh every time, you will get plans tailored to whichever leaderboards or score ranges are more popular.
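You can see the difference by comparing the generic plan a prepared statement gets against the plan for literal values. A quick sketch (the statement name and parameter types are just for illustration):

PREPARE count_scores(int, int) AS
SELECT count(*) FROM score
WHERE leaderboardid = $1 AND score <= $2 AND active;

EXPLAIN EXECUTE count_scores(35, 6841);
-- planned without knowing the parameter values

EXPLAIN SELECT count(*) FROM score
WHERE leaderboardid = 35 AND score <= 6841 AND active;
-- planned with the real values, so the estimates can differ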
On Wed, May 27, 2009 at 8:09 AM, Zach Calvert <zachcalvert@xxxxxxxxxxxxxxxxxx> wrote:
So Google hasn't been helpful and I'm not entirely sure what to look
for in the mailing lists to find the answer to my problem, so here
goes.
I have a query and I have run
explain analyze
select count(*)
from score
where leaderboardid=35 and score <= 6841 and active
The result is
"Aggregate (cost=2491.06..2491.07 rows=1 width=0) (actual
time=38.878..38.878 rows=1 loops=1)"
" -> Seq Scan on score (cost=0.00..2391.17 rows=39954 width=0)
(actual time=0.012..30.760 rows=38571 loops=1)"
" Filter: (active AND (score <= 6841) AND (leaderboardid = 35))"
"Total runtime: 38.937 ms"
I have an index on score, and another index on score, leaderboard, and
active. I can't seem to figure out how to create an index that will
turn that "Seq Scan" into an index scan. The biggest problem is that
the query degrades very quickly with a lot more rows and I will be
getting A LOT MORE rows. What can I do to improve the performance of
this query?
Thank you so much,
ZC