Re: Possible explanations for catastrophic performance deterioration?

Carlos Moreno wrote:

> That is:  the first time I run the query, it has to go through the
> disk; in the normal case it would have to read 100MB of data, but due
> to bloating, it actually has to go through 2GB of data.   Ok, but
> then, it will load only 100MB  (the ones that are not "uncollected
> disk garbage") to memory.  The next time that I run the query, the
> server would only need to read 100MB from memory --- the result should
> be instantaneous...

Wrong.  If there is 2GB of data, 1900MB of which is dead tuples, those
pages still have to be scanned for the count(*).  The system does not
distinguish pages that hold no live tuples from other pages, so it has
to load them all.  And since the buffer cache works at the page level,
not the tuple level, later runs must touch the same 2GB of pages --
they are just read from memory instead of disk.
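To make the point concrete, here is a toy model of a sequential scan
(purely illustrative -- the page layout and function names are invented
and do not reflect PostgreSQL's actual on-disk format): even when most
pages hold only dead tuples, a count must still read every page.

```python
# Toy heap: a list of fixed-size pages, each a list of tuple slots
# flagged live (True) or dead (False).  Hypothetical model only.

def count_live(pages):
    """Sequentially scan every page, counting live tuples.

    Returns (live_count, pages_read).  Pages containing only dead
    tuples are still read, because nothing marks them as skippable.
    """
    live = 0
    pages_read = 0
    for page in pages:
        pages_read += 1  # the page is fetched regardless of contents
        live += sum(1 for tuple_is_live in page if tuple_is_live)
    return live, pages_read

# 2000 pages, but only the first 100 hold live tuples; the other 1900
# are the "uncollected disk garbage" left behind by updates/deletes.
heap = ([[True] * 50 for _ in range(100)]
        + [[False] * 50 for _ in range(1900)])
live, read = count_live(heap)
# live == 5000, read == 2000: the scan touches all 2000 pages.
```

The count comes entirely from 100 pages, yet all 2000 are read -- which
is why VACUUM (to reclaim the dead space) is the fix, not caching.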

-- 
Alvaro Herrera                 http://www.amazon.com/gp/registry/CTMLCN8V17R4
"[PostgreSQL] is a great group; in my opinion it is THE best open source
development communities in existence anywhere."                (Lamar Owen)

---------------------------(end of broadcast)---------------------------
TIP 6: explain analyze is your friend
