Re: Volunteer to build a configuration tool

Greg,
I have a PostgreSQL database that runs on a dedicated server.  The
server has 24 GB of memory.  What is the maximum size I would ever
want to set shared_buffers to if I were relying on the OS for disk
caching?  It seems that no matter how big your dedicated server is,
there would be an upper limit on the size of shared_buffers.
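For a rough feel for the numbers involved, here is a back-of-the-envelope
sketch using the widely cited "25% of RAM" starting point for
shared_buffers on a dedicated server (the percentages are a common
community guideline, not a figure from this thread):

```python
# Back-of-the-envelope sizing for a dedicated 24 GB server.
# The 25%/75% split is a common rule of thumb, not a hard limit;
# treat the results as a starting point for testing, not a final answer.
ram_gb = 24

shared_buffers_gb = ram_gb // 4            # ~25% of RAM for shared_buffers
effective_cache_size_gb = ram_gb * 3 // 4  # ~75%: planner hint for OS cache

print(f"shared_buffers = {shared_buffers_gb}GB")
print(f"effective_cache_size = {effective_cache_size_gb}GB")
```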
 
Thanks,

Lance Campbell
Project Manager/Software Architect
Web Services at Public Affairs
University of Illinois
217.333.0382
http://webservices.uiuc.edu
 
-----Original Message-----
From: pgsql-performance-owner@xxxxxxxxxxxxxx
[mailto:pgsql-performance-owner@xxxxxxxxxxxxxx] On Behalf Of Greg Smith
Sent: Thursday, June 21, 2007 2:15 AM
To: pgsql-performance@xxxxxxxxxxxxxx
Subject: Re: [PERFORM] Volunteer to build a configuration tool

On Wed, 20 Jun 2007, Campbell, Lance wrote:

> If everything I said is correct then I agree "Why have
> effective_cache_size?"  Why not just go down the approach that Oracle
> has taken and require people to rely more on shared_buffers and the
> general memory driven approach?  Why rely on the disk caching of the
> OS?

First off, it may help explain the dynamics here if you know that until
fairly recent releases, the PostgreSQL shared_buffers cache had some
performance issues that made it impractical to make it too large.  It
hasn't been that long since relying more heavily on the Postgres cache
became technically feasible.  I think the user community at large is
still assimilating all the implications of that shift, and as such some
of the territory with making the Postgres memory really large is still
being mapped out.

There are also still some issues left in that area.  For example, the
bigger your shared_buffers cache is, the worse the potential is for
having a checkpoint take a really long time and disrupt operations.
There are OS tunables that can help work around that issue; similar
ones for the PostgreSQL buffer cache won't be available until the 8.3
release.
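As an illustration of the kind of OS tunables Greg mentions, on Linux
the kernel's writeback thresholds can be lowered so dirty pages are
flushed earlier, softening the I/O spike when a checkpoint writes out a
large shared_buffers cache.  This is a sketch; the exact values are
illustrative and should be tested on your own hardware:

```
# /etc/sysctl.conf fragment (Linux-specific; values are illustrative)
vm.dirty_background_ratio = 5   # start background writeback at 5% of RAM dirty
vm.dirty_ratio = 10             # force synchronous writeback at 10%
```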

In addition to all that, there are still several reasons to keep
relying on the OS cache:

1) The OS cache memory is shared with other applications, so relying on
it lowers the average memory footprint of PostgreSQL.  The database
doesn't have to be a pig that constantly eats all the memory up, while
still utilizing it when necessary.

2) The OS knows a lot more about the disk layout and similar low-level
details and can do optimizations a platform-independent program like
Postgres can't assume are available.

3) There are more people working on optimizing the caching algorithms
in modern operating systems than are coding on this project.  Using
that sophisticated cache leverages their work.
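To make point 1 concrete: on Linux you can see how much memory the
kernel is currently devoting to the shared page cache, memory that
PostgreSQL benefits from without permanently claiming it (a
Linux-specific sketch; other platforms expose this differently):

```shell
# Print the kernel page cache size from /proc/meminfo (Linux-specific).
# This memory is shared among all processes, PostgreSQL included, and
# the kernel reclaims it automatically when applications need RAM.
awk '/^Cached:/ {print $2, "kB of page cache"}' /proc/meminfo
```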

"The Oracle Way" presumes that you've got such a massive development
staff 
that you can solve these problems better yourself than the community at 
large, and then support that solution on every platform.  This is why
they 
ended up with solutions like raw partitions, where they just put their
own 
filesystem on the disk and figure out how to make that work well 
everywhere.  If you look at trends in this area, at this point the 
underlying operating systems have gotten good enough that tricks like
that 
are becoming marginal.  Pushing more work toward the OS is a completely 
viable design choice that strengthens every year.

--
* Greg Smith gsmith@xxxxxxxxxxxxx http://www.gregsmith.com Baltimore, MD

---------------------------(end of broadcast)---------------------------
TIP 6: explain analyze is your friend

