Josh Berkus wrote:
> Tom,
>
>> Right, because they do. If you think otherwise, demonstrate it.
>> (bonnie tests approximating a reverse seqscan are not relevant
>> to the performance of indexscans.)
>
> Working on it. I *think* I've seen this issue in the field, which is
> why I brought it up in the first place, but getting a good test case
> is, of course, difficult.
I think I may be experiencing this situation now.
The query
select comment_date
from user_comments
where user_comments.uid=1
order by comment_date desc limit 1
Explain:
"Limit (cost=0.00..2699.07 rows=1 width=8) (actual
time=52848.785..52848.787 rows=1 loops=1)"
" -> Index Scan Backward using idx_user_comments_comment_date on
user_comments (cost=0.00..5789515.40 rows=2145 width=8) (actual
time=52848.781..52848.781 rows=1 loops=1)"
" Filter: (uid = 1)"
"Total runtime: 52848.840 ms"
takes tens of seconds to complete (52 seconds on the last run). However,
select comment_date
from user_comments
where user_comments.uid=1
order by comment_date limit 1
Explain:
"Limit (cost=0.00..2699.07 rows=1 width=8) (actual time=70.402..70.403
rows=1 loops=1)"
" -> Index Scan using idx_user_comments_comment_date on
user_comments (cost=0.00..5789515.40 rows=2145 width=8) (actual
time=70.398..70.398 rows=1 loops=1)"
" Filter: (uid = 1)"
"Total runtime: 70.453 ms"
takes well under 1 sec.
comment_date is a timestamp with time zone and has the index
CREATE INDEX idx_user_comments_comment_date
ON user_comments
USING btree
(comment_date);
I don't understand why it is so much slower to scan the index in reverse.
It's a fairly big table. About 4.4 million rows, 888MB. That index is
96MB. I tried dropping and recreating the index, but it doesn't seem to
have helped any.
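(Purely as a sketch of how those sizes might be checked, and assuming a PostgreSQL version with pg_relation_size() and pg_size_pretty(), i.e. 8.1 or later:)

SELECT count(*) FROM user_comments;                                        -- row count
SELECT pg_size_pretty(pg_relation_size('user_comments'));                  -- table size
SELECT pg_size_pretty(pg_relation_size('idx_user_comments_comment_date')); -- index size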
Can I create a reverse index on the dates so it can do a forward scan
of the reverse index?
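(Sketching possibilities rather than anything confirmed in this thread: on PostgreSQL 8.3 or later an index column can be declared DESC, so the ORDER BY comment_date DESC query could be satisfied by a forward scan of such an index. A composite index leading with uid is another option, since the uid = 1 filter is what forces the scan to walk so many entries before finding a match. Both index names below are hypothetical.)

-- Descending index; assumes PostgreSQL 8.3+, where DESC index columns are supported.
CREATE INDEX idx_user_comments_comment_date_desc
ON user_comments
USING btree
(comment_date DESC);

-- Composite index starting with uid, so the planner can go straight to the
-- uid = 1 entries instead of filtering them out while walking the date index.
CREATE INDEX idx_user_comments_uid_comment_date
ON user_comments
USING btree
(uid, comment_date);

With the composite index, both the ascending and descending forms of the query should be able to stop after reading a single matching entry rather than filtering rows during the scan.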