Re: reducing random_page_cost from 4 to 2 to force index scan

Jim Nasby wrote:
> I think the challenge there would be how to define the scope of the
> hot-spot. Is it the last X pages? The last X serial values? Something
> like correlation?
>
> Hmm... it would be interesting if we had average relation access times
> for each stats bucket on a per-column basis; that would give the
> planner a better idea of how much I/O overhead there would be for a
> given WHERE clause.

You've already given one reasonable first answer to your own question here. If you defined a usage counter for each histogram bucket and incremented it each time a value from that bucket was touched, you'd have a very rough way to measure the access distribution. Compute each bucket's share of the total counts, take an estimate of the relation's overall cached percentage, and multiply the two; that gives you an idea of how much of that specific bucket might be in memory. It's not perfect, and you'd need to incorporate some sort of aging into the counters (probably weighted-average based), but the basic idea could work.
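To make that concrete, here's a minimal sketch in Python; this isn't planner code, and every name in it (BucketStats, cached_fraction, and so on) is hypothetical. It uses exponential decay as the aging method, and it reads "a ratio of the counts" as the bucket's share of recent accesses scaled against a uniform spread:

    # Hypothetical sketch of the per-bucket heuristic above; not
    # PostgreSQL code.
    class BucketStats:
        def __init__(self, n_buckets, decay=0.9):
            self.counts = [0.0] * n_buckets
            self.decay = decay  # how quickly old accesses fade out

        def touch(self, bucket):
            # Increment the usage counter whenever a value falling in
            # this histogram bucket is accessed.
            self.counts[bucket] += 1.0

        def age(self):
            # Periodic exponential decay: a weighted-average style of
            # aging, so the counters track recent access patterns.
            self.counts = [c * self.decay for c in self.counts]

        def cached_fraction(self, bucket, relation_cached_pct):
            # This bucket's share of all recent accesses, scaled
            # against a uniform spread (one reading of "a ratio of
            # the counts"), times the relation's estimated cached
            # percentage, capped at fully cached.
            total = sum(self.counts)
            if total == 0:
                return relation_cached_pct  # no data; use the average
            share = self.counts[bucket] / total
            return min(1.0, share * len(self.counts) * relation_cached_pct)

    stats = BucketStats(n_buckets=100)
    stats.touch(99)   # serial inserts keep hitting the last bucket
    stats.age()       # run periodically, e.g. at each stats update
    est = stats.cached_fraction(99, relation_cached_pct=0.25)

Under this reading, a hot tail bucket scores well above the table-wide average cached percentage, which is exactly the hot-spot effect being discussed.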

--
Greg Smith   2ndQuadrant US    greg@xxxxxxxxxxxxxxx   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
"PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books



