AFAIK PostgreSQL measures characteristics of the data distribution in the
tables and indexes (that is what VACUUM ANALYZE does), but the results of
those measurements are **weighted by** random_page_cost and
seq_page_cost. So the measurements are correct, but the costs (weights)
should reflect the real speed of sequential and random operations on the
storage device(s) (tablespaces) involved.
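The weighting described above can be sketched roughly as follows. This is a deliberate simplification of the planner's real cost model, and the page counts and function name are made-up for illustration:

```python
# A toy sketch (NOT the actual planner formula) of how page counts
# gathered from table/index statistics are weighted by the configured
# per-page cost parameters. The defaults mirror PostgreSQL's stock
# settings: seq_page_cost = 1.0, random_page_cost = 4.0.

def scan_cost(seq_pages, random_pages, seq_page_cost=1.0, random_page_cost=4.0):
    """Weight measured page counts by the configured per-page costs."""
    return seq_pages * seq_page_cost + random_pages * random_page_cost

# Sequential scan: read all 10,000 pages of the table in order.
seq_scan = scan_cost(seq_pages=10_000, random_pages=0)

# Index scan: touch far fewer pages, but each fetch is a random read.
index_scan = scan_cost(seq_pages=0, random_pages=3_000)

# With random reads weighted at 4x a sequential read, the sequential
# scan still looks cheaper here, even though it touches more pages.
print(seq_scan, index_scan)  # 10000.0 12000.0
```

The point of the sketch: the same measured page counts produce different plan choices purely by changing the weights, which is why the weights need to match the hardware.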
Jeremy Harris wrote:
On 08/17/2009 03:24 AM, Craig Ringer wrote:
On 16/08/2009 9:06 PM, NTPT wrote:
So I suggest we should have "random_page_cost" and
"seq_page_cost" configurable on a per-tablespace basis.
That strikes me as a REALLY good idea, personally, though I don't know
enough about the planner to factor in implementation practicalities and
any cost for people _not_ using the feature.
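The per-tablespace suggestion can be sketched with a small toy model: each tablespace carries its own pair of weights, and the cost calculation looks them up. This is purely illustrative; the tablespace names and cost values are assumptions, not actual PostgreSQL syntax or behavior:

```python
# Illustrative only: a toy per-tablespace cost table, assuming an SSD
# tablespace where random reads are only modestly more expensive than
# sequential ones, and a spinning-disk tablespace where they cost 4x.
TABLESPACE_COSTS = {
    "fast_ssd":  {"seq_page_cost": 1.0, "random_page_cost": 1.5},
    "slow_disk": {"seq_page_cost": 1.0, "random_page_cost": 4.0},
}

def scan_cost_for(tablespace, seq_pages, random_pages):
    """Weight page counts by the costs of the tablespace they live in."""
    c = TABLESPACE_COSTS[tablespace]
    return (seq_pages * c["seq_page_cost"]
            + random_pages * c["random_page_cost"])

# The same index scan (3,000 random page fetches) looks far cheaper
# when the table lives on the SSD tablespace.
print(scan_cost_for("fast_ssd", 0, 3_000))   # 4500.0
print(scan_cost_for("slow_disk", 0, 3_000))  # 12000.0
```

With a single global pair of costs, one of these two devices is always mis-modeled; letting each tablespace carry its own weights is exactly what resolves that.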
Could not pgsql *measure* these costs (on a sampling basis, and with long
time-constants)?
- Jeremy
--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general