Re: Benchmarking a large server

On Mon, 9 May 2011, David Boreham wrote:

On 5/9/2011 6:32 PM, Craig James wrote:
Maybe this is a dumb question, but why do you care? If you have 1TB RAM and just a little more actual disk space, it seems like your database will always be cached in memory anyway. If you "eliminate the cache effect," won't the benchmark actually give you the wrong real-life results?

The time it takes to populate the cache from a cold start might be important.
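To measure that cold-start cost on Linux, one approach (assuming root access) is to flush the kernel page cache between benchmark runs via the standard `drop_caches` interface, so the first run after a flush reads everything from disk:

```shell
# Sketch: force a cold page cache before a benchmark run.
# Stop postgres first so its shared buffers are released too.
sync                                        # flush dirty pages to disk first
echo 3 | sudo tee /proc/sys/vm/drop_caches  # 3 = page cache + dentries + inodes
# now restart postgres and time the first queries against the cold cache
```

Note this only clears clean cached pages; it is a benchmarking aid, not something to run on a production box.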

You may also have other processes contending with the disk buffers for memory (for that matter, Postgres may use a significant amount of that memory as it's producing its results).
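One way to see that contention during a run is to watch how the kernel splits memory between the page cache and everything else; a quick sketch reading `/proc/meminfo` (field names as reported by the Linux kernel, values in kB):

```shell
# Show how memory is divided between free pages, the page cache,
# and buffers while the benchmark is running (values converted to MB).
awk '/^(MemTotal|MemFree|Buffers|Cached):/ { printf "%s %d MB\n", $1, $2/1024 }' /proc/meminfo
```

Sampling this in a loop alongside the benchmark makes it obvious whether the cache is being squeezed out by process memory.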

David Lang

Also, if it were me, I'd be wanting to check for weird performance behavior at this memory scale. I've seen cases in the past where the VM subsystem went bananas because the designers and testers of its algorithms never considered the physical memory size we deployed.

How many times was the kernel tested with this much memory, for example? (Never?)





--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

