On 13/10/10 21:44, Mladen Gogala wrote:
> On 10/13/2010 3:19 AM, Mark Kirkwood wrote:
>> I think the major effect you are seeing here is that the UPDATE has
>> made the table twice as big on disk (even after VACUUM etc), and it has
>> gone from fitting in RAM to not fitting in RAM - so it cannot be
>> effectively cached anymore.
> In the real world, tables are larger than the available memory. I have
> tables of several hundred gigabytes in size. Tables shouldn't be
> "effectively cached" - the next step would be to measure the "buffer
> cache hit ratio". Tables should be effectively used.
Sorry, Mladen,
I didn't mean to suggest that all tables should fit into RAM... but was
pointing out (one reason) why Neil would expect to see a different
sequential scan speed after the UPDATE.
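
For anyone wanting to check this, a rough way is to compare the table's
on-disk size with the configured buffer cache. A minimal sketch below -
"test_table" is just a placeholder for whatever table the test used:

  -- "test_table" is a placeholder name.
  -- pg_relation_size() reports the main heap only; pg_total_relation_size()
  -- also counts indexes and TOAST. Plain VACUUM makes dead-row space
  -- reusable but does not shrink the file itself.
  SELECT pg_size_pretty(pg_relation_size('test_table'))       AS heap_size,
         pg_size_pretty(pg_total_relation_size('test_table')) AS total_size,
         current_setting('shared_buffers')                    AS shared_buffers;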
I agree that in many interesting cases, tables are bigger than RAM [1].
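
And on the "buffer cache hit ratio" point, that is easy enough to pull
from the standard statistics views. A rough sketch (again with a
placeholder table name); note it only counts hits in shared_buffers, not
the OS page cache:

  -- "test_table" is again just a placeholder name.
  SELECT relname,
         heap_blks_read,
         heap_blks_hit,
         round(heap_blks_hit::numeric
               / nullif(heap_blks_hit + heap_blks_read, 0), 3) AS hit_ratio
  FROM pg_statio_user_tables
  WHERE relname = 'test_table';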
Cheers
Mark
[1] Having said that, these days 64GB of RAM is not unusual for a
server... and we have many real customer databases smaller than that
where I work.