Re: postgresql.conf recommendations

On Sat, Feb 9, 2013 at 6:51 AM, Scott Marlowe <scott.marlowe@xxxxxxxxx> wrote:
> On Thu, Feb 7, 2013 at 7:41 AM, Charles Gomes <charlesrg@xxxxxxxxxxx> wrote:
>> I've benchmarked shared_buffers with high and low settings, in a server
>> dedicated to postgres with 48GB my settings are:
>> shared_buffers = 37GB
>> effective_cache_size = 38GB
>>
>> Having a small number and depending on OS caching is unpredictable; if the
>> server is dedicated to postgres you want to make sure postgres has the memory.
>> A random unrelated process doing a cat /dev/sda1 should not destroy postgres
>> buffers.
>> I agree your problem is most likely related to vm.dirty_background_ratio, since
>> shared buffers are READ only and have nothing to do with disk writes.
>
> You make an assertion here but do not tell us of your benchmarking
> methods.

Well, he is not the only one committing that sin.

> My testing in the past has shown catastrophic performance
> with a very large % of memory as postgresql buffers under heavy write
> loads, especially transactional ones.  Many others on this list have
> had the same thing happen.
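
One way to actually see that kind of effect (a sketch, not anyone's stated
methodology; the scale and client counts here are arbitrary and it needs a
running server you can afford to hammer):

```shell
# Hypothetical pgbench comparison: run the same write-heavy workload under
# two different shared_buffers settings and compare the reported tps.
pgbench -i -s 1000 bench           # initialize a ~15 GB TPC-B-like dataset
pgbench -c 32 -j 4 -T 600 bench    # 32 clients, 10-minute write-heavy run
# ...then change shared_buffers in postgresql.conf, restart, and rerun.
```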

People also have problems from setting it too low.  For example, doing
bulk loads into indexed tables becomes catastrophically bad when the
size of the index exceeds shared_buffers by too much (where "too much"
depends on the kernel, the IO subsystem, and the vm.dirty* settings), and
increasing shared_buffers up to 80% of RAM fixes that (if 80% of RAM
is large enough to hold the indexes being updated).

Of course when doing bulk loads into truncated tables, you should drop
the indexes.  But if bulk loading into live tables, that is often a
cure worse than the disease.
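
The drop-and-rebuild pattern, for the record (a sketch with made-up table
and index names):

```sql
-- Hypothetical bulk load into a truncated table: drop indexes first,
-- load, then rebuild, so the load is not throttled by random index I/O.
TRUNCATE staging_events;
DROP INDEX IF EXISTS staging_events_ts_idx;
COPY staging_events FROM '/tmp/events.csv' WITH (FORMAT csv);
CREATE INDEX staging_events_ts_idx ON staging_events (event_ts);
```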

> Also, you suppose PostgreSQL has a better
> / smarter caching algorithm than the OS kernel, and oftentimes this
> is NOT the case.

Even if it is not smarter as an algorithm, it might still be better to
use it.  For example,  "heap_blks_read", "heap_blks_hit", and friends
become completely useless if most block "reads" are not actually
coming from disk.
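
For example, the usual hit-ratio query (standard pg_statio_user_tables
columns) only describes shared_buffers, not total memory residency:

```sql
-- With small shared_buffers, blocks served from the OS page cache still
-- count as heap_blks_read, so this ratio understates in-memory service.
SELECT relname,
       heap_blks_read,
       heap_blks_hit,
       round(heap_blks_hit::numeric
             / nullif(heap_blks_hit + heap_blks_read, 0), 3) AS hit_ratio
FROM pg_statio_user_tables
ORDER BY heap_blks_read + heap_blks_hit DESC;
```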

Also, vacuum_cost_page_miss is impossible to tune if some unknown but
potentially large fraction of those misses are not really misses, and
that fraction changes from table to table, and from a wrap-around scan
to a visibility-map scan of the same table.
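
For reference, the related cost-model settings and their stock defaults
(a postgresql.conf sketch, values as shipped in releases of this era):

```
# Vacuum cost model: a "miss" is charged 10x a "hit", but with a small
# shared_buffers many of those "misses" are really OS-cache hits.
vacuum_cost_page_hit = 1
vacuum_cost_page_miss = 10
vacuum_cost_page_dirty = 20
vacuum_cost_limit = 200
```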

> In this particular instance the OP may not be seeing an issue from too
> large of a pg buffer, but my point still stands: a large pg buffer can
> cause problems with heavy or even moderate write loads.

Sure, but that can go the other way as well.  What additional
instrumentation is needed so that people can actually know which is
the case for them?

Cheers,

Jeff


-- 
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

