Howdy. I'm curious what, besides raw hardware speed, determines the performance of a Seq Scan served entirely from shared buffers. I ran the following on the client's server I'm profiling, which is otherwise idle:
EXPLAIN (ANALYZE ON, BUFFERS ON) SELECT * FROM notes;
Seq Scan on notes (cost=0.00..94004.88 rows=1926188 width=862) (actual time=0.009..1673.702 rows=1926207 loops=1)
Buffers: shared hit=74743
Total runtime: 3110.442 ms
(3 rows)
That's about 9x slower than what I get on my laptop with the same data. I ran stream-scaling on the machine and the results seem reasonable (8644.1985 MB/s with 1 core, up to 25017 MB/s with 12 cores). The box is running kernel 2.6.26.6-49 and PostgreSQL 9.0.6.
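
For what it's worth, one thing I could do to factor out EXPLAIN ANALYZE's per-row instrumentation overhead is time a plain query over the same table with psql's client-side stopwatch (same notes table as above; count(*) keeps the result transfer negligible):

\timing on
-- same sequential scan, but without EXPLAIN ANALYZE's per-row clock calls
-- and with almost nothing shipped back to the client
SELECT count(*) FROM notes;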
I'm stumped as to why it's so much slower. Any ideas on what might explain it, or other benchmarks I could run to try to narrow down the cause?
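
One idea I had: since EXPLAIN ANALYZE takes clock readings for every row it processes, I could compare how expensive those timing calls are on the server versus the laptop with something table-independent (the 1M-row generate_series is just an arbitrary size I picked):

-- if this runs dramatically slower on the server than on the laptop,
-- slow clock reads alone could inflate the Seq Scan timings above
EXPLAIN ANALYZE SELECT count(*) FROM generate_series(1, 1000000);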
Thanks!
Matt