Scott Carey wrote:
If postgres is memory bandwidth constrained, what can be done to reduce its bandwidth use? Huge pages could help some, by reducing page table lookups and making overall access more efficient. Compressed pages (snappy / lzo) in memory can trade CPU cycles for memory usage on certain memory segments/pages -- this could also save a lot of I/O if more pages fit in RAM as a result, and make the caches more effective.
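To make the huge pages idea concrete: on Linux the explicit route is mmap() with MAP_HUGETLB (kernel 2.6.32 and later), after reserving pages through the vm.nr_hugepages sysctl. What follows is just a minimal standalone sketch of the mechanism, not how postgres gets its shared memory -- that goes through System V shmget(), where SHM_HUGETLB is the analogous flag. The 64 MB size is an arbitrary example.

/* build: gcc -O2 -std=gnu99 hugepage_sketch.c */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 64 * 1024 * 1024;  /* must be a multiple of the huge page size */

    /* MAP_HUGETLB requests explicit huge pages (2 MB on typical x86-64),
     * so the same mapping needs far fewer TLB entries and page table walks. */
    char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");             /* typically ENOMEM if no pages are reserved */
        return 1;
    }

    /* Touch the mapping so it's actually populated, not just reserved. */
    for (size_t i = 0; i < len; i += 4096)
        buf[i] = 1;

    munmap(buf, len);
    return 0;
}

With 2 MB pages that 64 MB mapping needs 32 TLB entries instead of 16384, which is where the savings on page table lookups come from.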
The problem with a lot of these ideas is that they trade the memory bandwidth problem for increased disruption to the CPU's L1 and L2 caches, and I don't know how much net progress that leaves once the bottleneck moves there. Not every workload is memory constrained, either, so the ones that aren't might actually be hurt by the same optimizations that help in this situation.
I just posted the slides from my MySQL conference talk today at http://projects.2ndquadrant.com/talks , and those include some graphs of recent data collected with stream-scaling. The current situation is really strange in both Intel's and AMD's memory architectures, and I'm even seeing lightly loaded big servers get outperformed by small ones running the same workload. The 32- and 48-core systems using server-class DDR3-1333 just don't have the bandwidth to a single core that, say, an i7 desktop using triple-channel DDR3-1600 does. The trade-offs here are extremely hardware and workload dependent, and it's very easy to tune for one combination while slowing down another.
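For anyone who hasn't looked at what stream-scaling actually runs: it builds the standard STREAM benchmark and repeats it at increasing thread counts to chart how bandwidth scales with cores. A rough single-threaded sketch of the triad kernel it times looks like this; the array size here is an arbitrary example, and the only real requirement is that it's much larger than the last-level cache, so you measure RAM rather than cache.

/* build: gcc -O2 -std=gnu99 stream_triad_sketch.c */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

#define N (16 * 1024 * 1024)        /* 16M doubles = 128 MB per array */

static double a[N], b[N], c[N];     /* three arrays, 384 MB total */

int main(void)
{
    const double scalar = 3.0;
    struct timespec t0, t1;

    /* Populate the source arrays so the timed loop touches real pages. */
    for (size_t i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < N; i++)
        a[i] = b[i] + scalar * c[i];   /* triad: two reads plus one write */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("triad: %.2f GB/s\n", 3.0 * sizeof(double) * N / (secs * 1e9));
    return 0;
}

One copy of that loop running on a single core is the low end of the scaling graphs, and that's the number where the big servers come in below a desktop i7.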
--
Greg Smith   2ndQuadrant US   greg@xxxxxxxxxxxxxxx   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support   www.2ndQuadrant.us
"PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books