Greg Smith wrote:
Eliot Gable wrote:
Just curious if this would apply to PostgreSQL:
http://queue.acm.org/detail.cfm?id=1814327
It's hard to take this seriously at all when it's so ignorant of
actual research in this area. Take a look at
http://www.cc.gatech.edu/~bader/COURSES/UNM/ece637-Fall2003/papers/BFJ01.pdf
for a second
Interesting paper, thanks for the reference!
PostgreSQL is modeling a much more complicated situation where there
are many levels of caches, from CPU to disk. When executing a query,
the database tries to manage that by estimating the relative costs for
CPU operations, row operations, sequential disk reads, and random disk
reads. Those fundamental operations are then added up to build more
complicated machinery like sorting. To minimize query execution cost,
various query plans are considered, the cost computed for each one,
and the cheapest one gets executed. This has to take into account a
wide variety of subtle tradeoffs related to whether memory should be
used for things that would otherwise happen on disk. There are three
primary ways to search for a row, three main ways to do a join, and
two main ways to sort, and each of them needs a cost estimate that
balances CPU time, memory, and disk access.
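Those per-operation costs are exposed as planner settings, and EXPLAIN
shows the total the planner computed for whichever plan it picked. As a
minimal sketch (the "orders" table and its customer_id column are made up
for illustration; the defaults in the comments are the stock values):

-- Per-operation cost constants the planner adds up (stock defaults:
-- seq_page_cost = 1.0, random_page_cost = 4.0, cpu_tuple_cost = 0.01,
-- cpu_index_tuple_cost = 0.005, cpu_operator_cost = 0.0025)
SHOW seq_page_cost;
SHOW random_page_cost;
SHOW cpu_tuple_cost;
SHOW cpu_index_tuple_cost;
SHOW cpu_operator_cost;

-- The plan the planner chose, with the cost it estimated for it
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

-- EXPLAIN ANALYZE also runs the query, so estimated and actual row counts
-- and timings can be compared
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;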
Do you think that the cache-oblivious algorithms described in the paper
could speed up index scans that hit disk in Postgres's multi-level memory
case (OS and hardware caches included)? So that, e.g., random_page_cost
could go down?
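One way to experiment with that locally, just as a sketch (assuming a
table t with an index on its id column), might be to lower the setting
for a single session and compare the chosen plans:

SET random_page_cost = 2.0;  -- session-local experiment, not a recommendation
EXPLAIN SELECT * FROM t WHERE id BETWEEN 1000 AND 2000;
RESET random_page_cost;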
regards,
Yeb Havinga