Re: how to estimate shared_buffers...

On Sat, Jul 12, 2008 at 5:30 AM, Jessica Richard <rjessil@xxxxxxxxx> wrote:
> On a running production machine, we have shared_buffers set to 900M on a
> Linux host with 16G of memory. The size of all databases combined is about
> 50G. There are many transactions going on all the time (deletes, inserts,
> updates). We do not have a testing environment with the same setup and the
> same workload, so I want to evaluate on the production host whether this
> 900M is enough. If not, we still have room to go up a little to speed up
> all Postgres activities. I don't know enough about the SA side. I would
> imagine that something like the "top" command or other tools could measure
> how much total memory Postgres is actually using (against the configured
> 900M of shared buffers); if Postgres is using almost all of the 900M all
> the time, I would take that as an indication that shared_buffers could go
> up by another 100M...
>
> What is the best way to tell how much memory Postgres (all Postgres-related
> things) is actually using?

If you've got a 50G data set, then PostgreSQL is most likely using
whatever memory you give it for shared buffers; top should show that
easily. (One caveat: top charges the shared-memory segment to every
backend that has touched it, so per-process RES figures overlap and
can't simply be summed.)
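If you'd rather look at buffer usage directly instead of inferring it
from top, here's a minimal sketch using the contrib pg_buffercache
module (it ships with PostgreSQL but has to be installed into the
database first; on recent versions, CREATE EXTENSION pg_buffercache).
It exposes one row per buffer in shared_buffers; unused buffers have a
NULL relfilenode:

    -- count buffers that currently hold a page, and how many are dirty
    SELECT count(*) AS buffers_used,
           sum(CASE WHEN isdirty THEN 1 ELSE 0 END) AS buffers_dirty
    FROM pg_buffercache
    WHERE relfilenode IS NOT NULL;

With 900M of shared_buffers and 8K pages that's 115200 buffers total;
if buffers_used sits at that ceiling all the time, the cache is
saturated, which is exactly what you'd expect against a 50G data set.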

I'd say start at 25% of RAM, i.e. ~4G on this host (this is a 64-bit
machine, right?).  That leaves plenty of memory for the OS to cache
data, and for PostgreSQL to allocate work_mem and the like.
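For reference, an illustrative postgresql.conf change along those lines
(the values are assumptions for this 16G host, not measured
recommendations; effective_cache_size is my addition here, a planner
hint rather than an allocation):

    # postgresql.conf -- illustrative sizing for a 16G host
    shared_buffers = 4GB          # ~25% of RAM; takes effect only after a restart
    effective_cache_size = 12GB   # hint to the planner: RAM the OS has for caching
    # on PostgreSQL versions before 9.3, the kernel's SHMMAX limit may
    # also need raising before a 4GB shared segment can be allocated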

