Re: Planner selects different execution plans depending on limit


 



Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> writes:

> Bill Martin <bill(dot)martin(at)communote(dot)com> writes:

>> I've created the following table, which contains one million records.

>> ...

 

>> "Limit  (cost=10091.09..19305.68 rows=3927 width=621) (actual time=0.255..0.255 rows=0 loops=1)"

>> "  ->  Bitmap Heap Scan on core_content content  (cost=10091.09..57046.32 rows=20011 width=621) (actual time=0.254..0.254 rows=0 loops=1)"

>> "        Recheck Cond: (to_tsvector('simple'::regconfig, content) @@ '''asdasdadas'':*'::tsquery)"

>> "        ->  Bitmap Index Scan on ft_simple_core_content_content_idx  (cost=0.00..10086.09 rows=20011 width=0) (actual time=0.251..0.251 rows=0 loops=1)"

>> "              Index Cond: (to_tsvector('simple'::regconfig, content) @@ '''asdasdadas'':*'::tsquery)"

>> "Total runtime: 0.277 ms"

 

>> Is there any possibility to tune the performance even if the limit is only 10?

 

> The problem is the way-off rowcount estimate (20011 rows when it's

> really none); with a smaller estimate there, the planner wouldn't decide

> to switch to a seqscan.

> 

> Did you take the advice to increase the column's statistics target?

> Because 20011 looks suspiciously close to the default estimate that

> tsquery_opr_selec will fall back on if it hasn't got enough stats

> to come up with a trustworthy estimate for a *-pattern query.

> 

> (I think there are probably some bugs in tsquery_opr_selec's estimate

> for this, as I just posted about on pgsql-hackers.  But this number

> looks like you're not even getting to the estimation code, for lack

> of enough statistics entries.)

> 

> The other thing that seems kind of weird here is that the cost estimate

> for the bitmap index scan seems out of line even given the

> 20000-entries-to-fetch estimate.  I'd have expected a cost estimate of a

> few hundred for that, not 10000.  Perhaps this index is really bloated,

> and it's time to REINDEX it?

> 

>                      regards, tom lane

 

Hi,

thank you for helping me.

 

I've tried different values for the statistics target, but the result is always the same: the planner still switches to a seqscan when the limit is 10.

 

ALTER TABLE core_content ALTER column content SET STATISTICS 1000;
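One detail worth double-checking (an assumption on my side, since the mails don't mention it): a raised statistics target only takes effect once fresh statistics have been gathered, so the ALTER needs to be followed by an explicit ANALYZE:

```sql
-- Gather fresh statistics under the new target; until this runs,
-- the planner keeps using the old (smaller) sample.
ANALYZE core_content;
```

Afterwards the collected statistics can be inspected via the standard `pg_stats` view; for an expression index like this one, the entries appear under the index's name rather than the table's.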
 

I also reindexed the index, but the planner still switches to a seqscan.

 

REINDEX INDEX ft_simple_core_content_content_idx;
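As a quick sanity check on Tom's bloat theory, comparing the index size before and after the REINDEX shows whether bloat was actually the issue (using the standard size functions):

```sql
-- Size of the full-text index; run before and after REINDEX
-- and compare. A large drop would indicate the index was bloated.
SELECT pg_size_pretty(pg_relation_size('ft_simple_core_content_content_idx'));
```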
 
Disabling the seqscan helps, but is this a good decision for all use cases?
 
SET enable_seqscan = off;
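Rather than disabling seqscans globally, the setting can be scoped to a single transaction with SET LOCAL, so all other queries keep their normal plans (the query below is a placeholder for the actual full-text search):

```sql
BEGIN;
SET LOCAL enable_seqscan = off;  -- reverts automatically at COMMIT/ROLLBACK
-- ... the full-text query with LIMIT 10 goes here ...
COMMIT;
```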
 
Are there any other possibilities to solve my problem?
 
Best regards,
Bill Martin

 

