Re: https://stackoverflow.com/questions/28844170/how-to-limit-the-memory-that-is-available-for-postgressql-server

On 09/18/2017 10:44 PM, George Neuner wrote:
> On Tue, 19 Sep 2017 00:49:14 +0000, ???? <shohei.nkapl@xxxxxxxxx> wrote:
> 
>> For an academic experiment I need to *restrict the total amount of
>> memory that is available to a pgSQL server* while it computes a given
>> set of queries.
>>
>> I know that I can do this through the postgresql.conf file, where I
>> can adjust some parameters related to Resource Management.
>>
>> The problem is that it's not clear to me - given the several
>> parameters available in the config file - which parameter I should
>> change.
>>
>> When I first opened the config file I was expecting something like
>> this: max_server_memory. Instead I found a lot of: shared_buffers,
>> temp_buffers, work_mem, and so on...
>>
>> Given that, I've consulted the pgSQL docs on Resource Consumption
>> <http://www.postgresql.org/docs/9.3/static/runtime-config-resource.html>
>> and came up with shared_buffers as the best candidate for what I'm
>> looking for: *the parameter that restricts the total amount of memory
>> that a pgSQL server can use to perform its computation*. But I'm not
>> completely sure about this.
>>
>> Can you guys give me some insight about which parameters I should
>> adjust to restrict the pgSQL server's memory, please?
> 
> What you are asking for - a way to configure PostgreSQL with a hard
> memory limit - is effectively impossible.  Shared memory isn't really
> a hard limit on anything - it's just a cache for table and index
> pages.  You can limit how much of it is available, but there isn't
> any way to limit how much memory a particular query [worker process]
> can take.  Beyond the shared cache, local [to the worker process]
> work buffers are allocated as needed to perform the joins, sorts,
> groupings, etc. specified by the query - up to work_mem per
> operation, and a single query may run many such operations.  For any
> given query, you may be able to EXPLAIN/ANALYZE your way to a
> reasonable estimate of the maximum allocation, but there isn't any
> way via configuration to actually hold the worker process to that
> maximum.
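> 
> For reference, the knobs in play look something like this (values are
> purely illustrative, not recommendations):
> 
>     # postgresql.conf -- per-structure limits, not a global cap
>     shared_buffers = 512MB        # shared page cache for the cluster
>     work_mem = 16MB               # per sort/hash operation, per worker;
>                                   # one query can use this many times over
>     temp_buffers = 8MB            # per session, for temporary tables
>     maintenance_work_mem = 128MB  # per VACUUM / CREATE INDEX operation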
> 
> The only way I can think of to impose such limits would be to sandbox
> the processes with ulimit.  If you set appropriate limits before
> starting the postmaster process, those limits will apply to every
> worker process it spawns afterwards.  The thing to remember is that
> such limits apply to each process individually - e.g., if you say
> "ulimit -d 500000" (the value is in kilobytes) and then start
> PostgreSQL, each individual worker process will be able to use up to
> ~500MB.  And when you limit the data size or the address space, you
> need to account for the shared memory as well.
> See https://ss64.com/bash/ulimit.html
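> 
> A minimal sketch of that approach (the data directory path here is an
> assumption - adjust for your install):
> 
>     ulimit -d 500000                          # ~500MB data segment,
>                                               # per process
>     pg_ctl -D /var/lib/postgresql/data start  # every backend forked by
>                                               # the postmaster inherits
>                                               # the limit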
> 
> If you want to place a global limit on the entire PostgreSQL "server"
> [i.e., the collection of worker processes], you can instead limit the
> user that owns the processes (in /etc/security/limits.conf) - usually
> "postgres" when PostgreSQL is run as a service.


The easiest way to impose a limit on the entire Postgres cluster is to
run it in a container using Docker. For example, you could use the
official image from hub.docker.com and run it with the "--memory"
option.

https://hub.docker.com/_/postgres/
https://docs.docker.com/engine/reference/commandline/run/
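
A minimal sketch, assuming the official image (the password and the 1GB
cap are arbitrary examples):

    docker run -d --name pg-limited --memory=1g \
        -e POSTGRES_PASSWORD=secret postgres

The kernel's cgroup memory limit then applies to everything the
container's processes allocate - shared buffers, work memory, and all.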

-- 
Jonathan Rogers
Socialserve.com by Emphasys Software
jrogers@xxxxxxxxxxxxxxxxxxxxx
