Re: how big shmmax is good for Postgres...

Some corrections:

On Thu, Jul 10, 2008 at 6:11 AM, Scott Marlowe <scott.marlowe@xxxxxxxxx> wrote:

SNIP

> If you commonly have 100 transactions doing that at once, then you
> multiply how much memory they use by 100 to get the total buffer >> SPACE <<
> in use, and the rest is likely NEVER going to get used.
>
> In these systems, what seems like a bad idea, lowering
> shared_buffers, might be exactly the right call.
>
> For session servers and large transactional systems, it's often best
> to let the OS cache most of the data, and to have
> enough shared buffers to handle 2-10 times the in-memory data set
> size.  This will result in a buffer size of a few hundred megabytes.
>
> The advantage here is that the (NOT OS) DATABASE doesn't have to spend a
> lot of time maintaining a large buffer pool, and checkpoints are cheaper.
> The background writer can use spare >> CPU << and I/O cycles to write out
> the now smaller number of dirty pages in shared memory, and the system
> runs faster.
>
> Conversely, you need large numbers of shared_buffers when you
> have something like a large social networking site.  A LOT of people
> updating a large data set at the same time likely need way more
> shared_buffers to run well.  A user might be inputting data for several
> minutes or even hours.  The same pages are getting hit over and over,
> too.  For this kind of app, you need as much memory as you can afford
> to throw at the problem, and a semi-fast, large RAID array.  A large
> >> RAID << cache means your RAID controller / array only has to write, on
> average, as fast as the database commits.

Just minor edits.  If there's anything obviously wrong, someone please
let me know.  To make the sizing concrete, a few illustrative snippets
follow (my numbers, not Scott's).
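
To put rough numbers on the first point (a back-of-the-envelope sketch
with made-up figures, not from the quoted mail): if ~100 concurrent
transactions each touch ~2 MB of distinct pages, they pin at most
100 x 2 MB = 200 MB of shared buffers, so a multi-gigabyte pool would
mostly sit idle.  Something like this in postgresql.conf would cover
the session-server case:

    # postgresql.conf -- session-server / OLTP sketch (illustrative
    # values only, not a recommendation from this thread)
    shared_buffers = 256MB         # a few hundred MB, per the above
    bgwriter_delay = 200ms         # default; bgwriter uses spare cycles
    bgwriter_lru_maxpages = 100    # default cap on pages written per round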
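Since the subject line asks about shmmax: on the 8.x releases Postgres
allocates shared_buffers (plus some overhead for wal_buffers, locks,
etc.) as a single System V segment, so kernel.shmmax must be at least
that big or the server won't start.  An illustrative sysctl setting
(sizes are examples, not tuned values):

    # /etc/sysctl.conf
    kernel.shmmax = 1073741824    # 1 GB max single segment, above the
                                  # total Postgres shared segment
    kernel.shmall = 2097152       # total shared pages (4 kB each) = 8 GB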
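For the big-shared_buffers case, one rough way to check whether the
buffer pool is actually absorbing those repeated hits is the buffer
hit ratio (my suggestion, not from the quoted mail):

    -- hit ratio for the current database; a low value under a hot,
    -- repeatedly-hit working set suggests the buffer pool is too small
    SELECT blks_hit, blks_read,
           round(blks_hit::numeric
                 / nullif(blks_hit + blks_read, 0), 4) AS hit_ratio
    FROM pg_stat_database
    WHERE datname = current_database();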

