
Re: shared_buffers 8GB maximum

On Sat, 17 Feb 2018 00:36:57 +0200, Vitaliy Garnashevich
<vgarnashevich@xxxxxxxxx> wrote:


>- I'm not a huge Linux expert, but I've heard someone say that
>reading from the filesystem cache requires a context switch.

Yes.

>I suspect that such reads are slightly more expensive now after the
>Meltdown/Spectre patch in the kernel.

Not necessarily - it depends on exactly what was changed ... which
unfortunately I don't know for certain.

Any filesystem call is a kernel transition.  That's a Meltdown issue.
Linux mitigates Meltdown with kernel page-table isolation (KPTI):
syscalls enter through small trampoline functions that call the (real)
kernel functions, each trampoline isolated so that no other code
immediately follows it.  This wastes some memory and adds a small
per-call cost, mostly noticeable on syscall-heavy workloads.


Spectre is about snooping, via speculative execution, within the user
space of a single process - it is not tied to kernel calls the way
Meltdown is.  The issues with Spectre are things like untrusted code
breaking out of "sandboxes", snooping on password handling or
encryption, etc.

Fixing Spectre requires purposefully limiting speculative execution of
code and can significantly affect performance, but the effects are
situation-dependent.


>Could that be a reason for increasing the value of shared_buffers?
>
>- Could shared_buffers=128GB or more on a 250 GB RAM server be a 
>reasonable setting? What downsides could there be?

It depends.  8GB is pretty small for such a large server, but taking
1/2 the RAM is not necessarily the right thing either.

The size of shared buffers affects WAL volume and the time needed to
complete checkpoints.  If a large(ish) percentage of your workload is
writes, having a very large shared space could be bad for performance,
or bad for space on the WAL device.
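
If you want to watch that while tuning, pg_stat_bgwriter shows how the
buffer writes split between checkpoints, the background writer, and
the backends themselves.  A quick sanity check (column names as in the
PostgreSQL releases current at the time of writing):

    SELECT checkpoints_timed,   -- checkpoints triggered by checkpoint_timeout
           checkpoints_req,     -- checkpoints forced early, e.g. by WAL volume
           buffers_checkpoint,  -- buffers written by the checkpointer
           buffers_clean,       -- buffers written by the background writer
           buffers_backend      -- buffers written directly by backends
    FROM pg_stat_bgwriter;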

Another reason to hold back may be that the server is not dedicated to
PG but does other things as well.  Dramatically increasing PG's memory
use may negatively impact something else running there.
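
If you do decide to raise it, shared_buffers can be set in
postgresql.conf or via ALTER SYSTEM; either way the change only takes
effect after a full server restart.  A minimal sketch - the 64GB value
here is purely illustrative, not a recommendation:

    ALTER SYSTEM SET shared_buffers = '64GB';
    -- Takes effect only after a restart (a reload is not enough),
    -- e.g.: pg_ctl restart -D <data directory>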


>PS. Some background. We had shared_buffers=8GB initially. In 
>pg_stat_bgwriter we saw that dirty buffers were written to disk more 
>frequently by backends than during checkpoints (buffers_clean > 
>buffers_checkpoint, and buffers_backend > buffers_checkpoint). According 
>to pg_buffercache extension, there was very small percentage of dirty 
>pages in shared buffers. The percentage of pages with usagecount >= 3 
>was also low. Some of our more frequently used tables and indexes are 
>more than 10 GB in size. This all suggested that probably the bigger 
>tables and indexes, whenever scanned, are constantly flushing pages from 
>the shared buffers area. After increasing shared_buffers to 32GB, the 
>picture started looking healthier. There were 1GB+ of dirty pages in 
>shared buffers (vs 40-200MB before), 30-60% of pages with usagecount >= 
>3 (vs 10-40% before), buffers_checkpoint started to grow faster than 
>buffers_clean or buffers_backend. Still, not all of the frequently 
>used data fits in shared_buffers, so we're considering increasing the 
>parameter further. I wanted to have some idea of how big it could 
>reasonably be.

So now you know that 32GB is better for your workload than 8GB.  But
that is not necessarily a reason to immediately go crazy with it.  Try
increasing it gradually - e.g., adding 16GB at a time - and see whether
the additional shared space provides any real benefit.
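
One way to judge each step is to repeat the checks you already did:
after running at the new size for a while, look at the usagecount and
dirty-page distribution in shared buffers (a rough sketch using the
pg_buffercache extension you mentioned):

    SELECT usagecount,
           count(*) AS buffers,
           count(*) FILTER (WHERE isdirty) AS dirty
    FROM pg_buffercache
    GROUP BY usagecount
    ORDER BY usagecount;

If the share of high-usagecount pages stops improving from one step to
the next, the extra shared space is probably not buying you much.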


>PPS. I know any possible answer might be just a recommendation, and 
>parameter values should be tested for each specific case, but still 
>wanted to hear your opinion. Thanks.
>
>Regards,
>Vitaliy

George





