I am trying to fully understand how the costs for queries are computed.
Take the following example:
CREATE TABLE test (name varchar(250) primary key) ;
INSERT INTO test (name) VALUES(generate_series(1, 1000)::text) ;
ANALYZE test ;
EXPLAIN SELECT * FROM test WHERE name = '4' ;
I am getting the output:
Index Scan using test_pkey on test (cost=0.00..8.27 rows=1 width=3)
Index Cond: ((name)::text = '4'::text)
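To sanity-check the page-read assumption, I also looked at how many pages the table and its index actually occupy (relpages and reltuples in pg_class are estimates that ANALYZE keeps up to date):

-- Size of the table and its primary key index, in 8 kB pages.
SELECT relname, relpages, reltuples
FROM pg_class
WHERE relname IN ('test', 'test_pkey') ;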
The server is using the default cost parameters.
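For completeness, this is how I checked them (the standard planner cost constants from pg_settings):

-- Planner cost constants; the defaults are seq_page_cost = 1.0,
-- random_page_cost = 4.0, cpu_tuple_cost = 0.01,
-- cpu_index_tuple_cost = 0.005, cpu_operator_cost = 0.0025.
SELECT name, setting
FROM pg_settings
WHERE name IN ('seq_page_cost', 'random_page_cost', 'cpu_tuple_cost',
               'cpu_index_tuple_cost', 'cpu_operator_cost') ;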
The value I want to understand is 8.27. From reading the book
"PostgreSQL 9.0 High Performance" I know that we have one index page
read (a random page fetch, cost = 4.0) and one table row read (another
random page fetch, cost = 4.0), which adds up to a total of 8.0. But
where does the missing 0.27 come from?
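To show my arithmetic: with the documented defaults (random_page_cost =
4.0, cpu_tuple_cost = 0.01, cpu_index_tuple_cost = 0.005,
cpu_operator_cost = 0.0025), the naive per-tuple accounting I can come
up with is:

2 random page fetches:    2 * 4.0    = 8.0000
1 index tuple processed:  1 * 0.005  = 0.0050
1 heap tuple processed:   1 * 0.01   = 0.0100
1 operator evaluation:    1 * 0.0025 = 0.0025
                              total  = 8.0175

so the simple CPU add-ons I can think of only explain about 0.02 of the
0.27.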
If I modify the example to insert 10,000 rows, the cost stays the same.
Only when I go up to 100,000 rows does the computed cost increase, to 8.29.
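For reference, this is the variant I ran for the larger tests (I
switched to INSERT ... SELECT for the series, which should not change
the resulting plan):

-- Same test with 100,000 rows; only the upper bound of the series changes.
TRUNCATE test ;
INSERT INTO test (name) SELECT generate_series(1, 100000)::text ;
ANALYZE test ;
EXPLAIN SELECT * FROM test WHERE name = '4' ;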
Can anybody enlighten me, please? ;-)