On Wed, Feb 8, 2017 at 7:44 AM, Tom Lane <tgl@xxxxxxxxxxxxx> wrote:
Albe Laurenz <laurenz.albe@xxxxxxxxxx> writes:
> Bill Moran wrote:
>> What I feel is the best way to mitigate the situation, is to have some
>> setting that limits the maximum RAM any backend can consume.
> I'd delegate that problem to the operating system, which, after all,
> should know best how much memory a process uses.
I've had some success using ulimit in the past, although it does have
the disadvantage that you have to impose the same limit on every PG
process. (You set it before starting the postmaster, and it is inherited
by every child process.) If memory serves, limiting with the -v switch
works better than -d or -m on Linux, but I might be misremembering.
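For illustration, here is a small standalone C sketch (not PostgreSQL code; the 256 MB cap is arbitrary) of the behavior described above: "ulimit -v" corresponds to RLIMIT_AS, the limit is inherited across fork(), and an allocation that would exceed it simply fails:

    /* Standalone illustration only: "ulimit -v" sets RLIMIT_AS, which is
     * inherited across fork(), so a cap set before the postmaster starts
     * applies to every backend, and an oversized allocation fails. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/resource.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int
    main(void)
    {
        /* Cap the address space at 256 MB, as "ulimit -v 262144" would. */
        struct rlimit lim = { 256UL * 1024 * 1024, 256UL * 1024 * 1024 };
        if (setrlimit(RLIMIT_AS, &lim) != 0)
        {
            perror("setrlimit");
            return 1;
        }

        pid_t pid = fork();
        if (pid == 0)
        {
            /* The child (think: a backend) inherits the limit unchanged. */
            void *p = malloc(512UL * 1024 * 1024);   /* exceeds the cap */
            printf("child: 512 MB malloc %s\n", p ? "succeeded" : "failed");
            free(p);
            _exit(0);
        }
        waitpid(pid, NULL, 0);
        return 0;
    }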
Conceivably we could add code to let the ulimit be set per-process,
if the use-case were strong enough.
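Purely as a hypothetical sketch of what that could look like (the setting name and call site below are invented; nothing like this exists today), the OS primitive involved, setrlimit(), can already be applied per-process after fork():

    /* Hypothetical sketch: a backend-side routine that applies its own
     * RLIMIT_AS after fork().  The "backend_mem_limit_kb" knob is invented
     * for illustration; PostgreSQL has no such setting. */
    #include <stdio.h>
    #include <sys/resource.h>

    /* Imagined per-backend setting, in kB (0 = keep the inherited limit). */
    static long backend_mem_limit_kb = 524288;       /* e.g. 512 MB */

    static void
    apply_backend_memory_limit(void)
    {
        struct rlimit lim;

        if (backend_mem_limit_kb <= 0)
            return;

        lim.rlim_cur = (rlim_t) backend_mem_limit_kb * 1024;
        lim.rlim_max = lim.rlim_cur;

        /* Same resource as "ulimit -v", but set after fork(), so each
         * process could in principle get a different ceiling. */
        if (setrlimit(RLIMIT_AS, &lim) != 0)
            perror("setrlimit");          /* a real backend would elog() */
    }

    int
    main(void)
    {
        apply_backend_memory_limit();

        struct rlimit lim;
        getrlimit(RLIMIT_AS, &lim);
        printf("address-space limit now %ld kB\n", (long) (lim.rlim_cur / 1024));
        return 0;
    }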
To implement a limit inside PG, we'd have to add expensive bookkeeping
to the palloc/pfree mechanism, and even that would be no panacea because
it would fail to account for memory allocated directly from malloc.
Hence, you could be pretty certain that it would be wildly inaccurate
for sessions using third-party code such as PostGIS or Python. An
OS-enforced limit definitely sounds better from here.
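As a toy illustration of that accounting gap (standalone C with invented names, not PostgreSQL code): even if every allocation routed through a tracking wrapper were counted, a library that calls malloc() directly never touches the counter:

    /* Toy sketch of the bookkeeping problem.  Allocations routed through the
     * accounting wrapper are counted; a library-style call to bare malloc()
     * (think GEOS under PostGIS) is invisible to the per-session total. */
    #include <stdio.h>
    #include <stdlib.h>

    static size_t session_allocated = 0;   /* hypothetical per-session total */

    static void *
    tracked_alloc(size_t size)
    {
        session_allocated += size;         /* bookkeeping on every allocation */
        return malloc(size);
    }

    static void *
    library_alloc(size_t size)
    {
        return malloc(size);               /* never touches session_allocated */
    }

    int
    main(void)
    {
        void *a = tracked_alloc(1024 * 1024);           /* 1 MB, counted    */
        void *b = library_alloc(64UL * 1024 * 1024);    /* 64 MB, uncounted */

        /* The counter reports 1 MB although ~65 MB has been requested. */
        printf("accounted: %zu bytes\n", session_allocated);
        free(a);
        free(b);
        return 0;
    }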
Confirming what Tom said: with respect to the specific example in this thread, a large proportion of the allocations in the memory-hungry bits of PostGIS do in fact use bare malloc, via the GEOS library.
P