Re: Getting rid of a seq scan in query on a large table

Jens Hoffrichter <jens.hoffrichter@xxxxxxxxx> wrote:
 
> I'm having trouble getting rid of a sequential scan on a table
> with roughly 120k entries in it.
 
Please post your configuration information and some information
about your hardware and OS.
 
http://wiki.postgresql.org/wiki/SlowQueryQuestions
 
Since the table scan went through about 120000 rows in 60 ms, it is
clear that your data is heavily cached, so random_page_cost should
probably be close or equal to seq_page_cost, and both should
probably be somewhere around 0.1 to 0.5.  You should set
effective_cache_size to the sum of shared_buffers and whatever
your OS cache size is.  I have sometimes found that I get faster
plans with cpu_tuple_cost increased.
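 
As a sketch, those settings might look something like this in
postgresql.conf (the values here are illustrative guesses for a
fully cached workload, not recommendations; tune and re-test on
your own hardware):
 
```
# postgresql.conf -- illustrative values only
seq_page_cost = 0.1          # cached data makes sequential reads cheap
random_page_cost = 0.1       # close or equal to seq_page_cost when cached
effective_cache_size = 6GB   # shared_buffers plus the OS filesystem cache
#cpu_tuple_cost = 0.02       # raising this sometimes yields faster plans
```
 
You can also experiment per-session before changing anything
globally, e.g. SET random_page_cost = 0.1; and then re-run
EXPLAIN ANALYZE on the query.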
 
If such tuning causes it to choose the plan you expect, be sure
to time it against what you have been getting.  If the new plan is
slower, you've taken the adjustments too far.
 
-Kevin

-- 
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
