Re: Tracking Full Table Scans

Yes. I am looking for the justified full table scans.

If bigger tables are getting scanned, I would like to know the percentage of rows scanned versus the percentage of rows returned as the output.

If the query needs 80% of the rows in its output, then a full table scan is usually the better plan.

I believe this is possible in Postgres. I think we can get it from pg_stat_user_tables, pg_statio_user_tables and pg_stats.
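
A rough starting point (just a sketch against pg_stat_user_tables and pg_class; it only shows per-table seq scan activity, not the per-query output percentage, and the column aliases are only mine) might be:

-- Tables with sequential scan activity, plus a rough figure for how many
-- rows each seq scan reads on average. The seq_scan / seq_tup_read counters
-- come from the stats collector; reltuples is an estimate kept up to date
-- by ANALYZE / VACUUM, so treat the numbers as approximations.
SELECT s.relname,
       s.seq_scan,
       s.seq_tup_read,
       c.reltuples::bigint                         AS estimated_rows,
       pg_size_pretty(pg_relation_size(c.oid))     AS table_size,
       round(s.seq_tup_read::numeric / s.seq_scan) AS avg_rows_per_seq_scan
FROM pg_stat_user_tables s
JOIN pg_class c ON c.oid = s.relid
WHERE s.seq_scan > 0
ORDER BY s.seq_tup_read DESC
LIMIT 20;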

I will post the full calculation once I get it.

Thanks
VB

On Wed, Sep 28, 2011 at 6:25 AM, Craig Ringer <ringerc@xxxxxxxxxxxxx> wrote:
On 09/28/2011 12:26 AM, Venkat Balaji wrote:
Thanks a lot, Kevin!

Yes. I intended to track full table scans first to ensure that only
small tables or tables with very few pages are (as you said) getting
scanned in full.

It can also be best to do a full table scan of a big table for some queries. If the query needs to touch all the data in a table - for example, for an aggregate - then the query will often complete fastest, and with less disk I/O, by using a sequential scan.

I guess what you'd really want to know is which queries do seqscans but match only a relatively small fraction of the tuples scanned, i.e. low-selectivity seqscans. I'm not sure whether it's possible to gather this data with PostgreSQL's current level of stats detail.

--
Craig Ringer

