Re: https://stackoverflow.com/questions/28844170/how-to-limit-the-memory-that-is-available-for-postgressql-server

On Tue, 19 Sep 2017 00:49:14 +0000, ???? <shohei.nkapl@xxxxxxxxx> wrote:

For an academic experiment I need to *restrict the total amount of memory
that is available for a pgSQL server* to compute a given set of queries.

I know that I can do this through the postgresql.conf file, where I can adjust
some parameters related to Resource Management.

The problem is that it's not clear to me--given the several parameters
available in the config file--which parameter I should change.

When I first opened the config file I was expecting something like
max_server_memory. Instead I found a lot of: shared_buffers, temp_buffers,
work_mem, and so on...

Given that, I consulted the pgSQL docs on Resource Consumption
<http://www.postgresql.org/docs/9.3/static/runtime-config-resource.html> and
came up with shared_buffers as the best candidate for what I'm
looking for: *the parameter that restricts the total amount of memory that
a pgSQL server can use to perform its computation*. But I'm not completely
sure about this.

Can you guys give me some insight about which parameters I should adjust to
restrict the pgSQL server's memory, please?

What you are asking for - a way to configure PostgreSQL with a hard memory limit - is effectively impossible.  Shared memory isn't really a hard limit on anything - shared_buffers is just a cache of table and index pages, not a cap on query memory.  You can limit how large that cache is, but there isn't any way to limit how much memory a particular query [worker process] can take.  Beyond the shared cache, each worker process allocates local work buffers as needed to perform the joins, sorts, groupings, etc. that its query specifies.  For any given query, you may be able to explain/analyze your way to a reasonable estimate of the maximum allocation, but there is no configuration setting that actually holds the worker process to that maximum.
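
For orientation, here is a sketch of what the main memory parameters in postgresql.conf actually bound (values are illustrative only, not recommendations):

    shared_buffers = 128MB        # shared cache of table/index pages, allocated once at startup
    work_mem = 4MB                # per sort/hash operation, per worker - one query can take this several times over
    temp_buffers = 8MB            # per-session cap on temporary-table buffers
    maintenance_work_mem = 64MB   # per maintenance operation (VACUUM, CREATE INDEX, ...)

Note that none of these is a total: a query with several sorts and hashes can legitimately allocate many multiples of work_mem.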

The only way I can think of to impose such limits is to sandbox the processes with ulimit.  If you set appropriate limits before starting the postmaster process, those limits apply to every worker process it spawns afterwards.  The thing to remember is that the limits apply to each process individually - e.g., if you say "ulimit -d 500000" and then start PostgreSQL, each individual worker process will be able to use up to ~500MB.  And when you limit the data size or the address space, you need to account for the shared memory as well.
See https://ss64.com/bash/ulimit.html
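
A minimal sketch of the manual-start case (the data directory path is a placeholder; ulimit -d/-v values are in KB):

    ulimit -d 500000                      # cap each process's data segment at ~500MB
    ulimit -v 1500000                     # optionally cap the address space too - this one
                                          # must also be big enough to cover the shared memory
    pg_ctl -D /var/lib/pgsql/data start   # every backend forked from here inherits the limits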

If you want the limits to apply to the entire PostgreSQL "server" [i.e. the whole collection of worker processes] regardless of how it is started, you can limit the user that owns the processes (in /etc/security/limits.conf) - usually "postgres" when PostgreSQL is run as a service.  Keep in mind that even these per-user limits are enforced per process, not as an aggregate across all of the user's processes.
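
For example, entries along these lines in /etc/security/limits.conf (assuming "postgres" is the owning user; values in KB):

    # <domain>  <type>  <item>  <value>
    postgres    hard    data    500000
    postgres    hard    as      1500000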


Using ulimit isn't difficult if you are starting/stopping PostgreSQL manually, but it's a pain when PostgreSQL is running as a system service.  To limit a service, you have to either limit the owning user [and hope that doesn't break something else], or find and edit the init scripts that start the service - and what to do there depends on whether the system uses SysVinit or Upstart to manage its services.
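
As a sketch of the init-script route (script names and values are distribution-dependent):

    # SysVinit: in /etc/init.d/postgresql, before the postmaster is launched
    ulimit -d 500000                  # KB; inherited by every forked backend

    # Upstart: add a limit stanza to the job file (values here are in bytes)
    limit data 512000000 512000000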

If you're on Windows, good luck.   I know that there are things called "Job objects"  [something in between Linux sessions and process groups] that can be used to limit process resources ... but I have no idea how to do that.

Hope this ... doesn't confuse things even more.
George


--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


