<get connection from pool>
SET work_mem = '512MB';
<run query>
RESET work_mem;  -- restore the session default
<return connection to pool>

or maybe:

<get connection from pool>
BEGIN;
SET work_mem = '512MB';
<run query>
ROLLBACK;  -- the SET rolls back too, restoring the previous work_mem
<return connection to pool>
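
A variant on the second pattern: SET LOCAL confines the setting to the
enclosing transaction, so it reverts at COMMIT as well as ROLLBACK (the
query itself is a placeholder):

<get connection from pool>
BEGIN;
SET LOCAL work_mem = '512MB';
<run query>
COMMIT;  -- work_mem reverts to the session value here
<return connection to pool>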
On Wed, Aug 18, 2010 at 11:52 PM, Scott Marlowe <scott.marlowe@xxxxxxxxx> wrote:
Exactly, it's about the concurrency. I have a server with 128G of RAM,
but it runs dozens of queries at a time for hundreds of clients a
second. The chance that something big for work_mem might jump up and
bite me is pretty good there. Even so, 16Meg isn't really big for that
machine, and I might test cranking it up. Note that a large work_mem
can cause the kernel to flush its cache, which means going to disk for
everybody's data, so all the queries get slow instead of just one. Keep
an eye on how a high work_mem setting affects your kernel cache.
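
For what it's worth, EXPLAIN ANALYZE shows whether a given sort fit in
work_mem or spilled to disk, so you can check before cranking anything
up (the table and column names here are just placeholders):

SET work_mem = '16MB';
EXPLAIN ANALYZE SELECT * FROM big_table ORDER BY some_col;
-- "Sort Method: quicksort  Memory: ...kB"      -> it fit in work_mem
-- "Sort Method: external merge  Disk: ...kB"   -> it spilled to disk
RESET work_mem;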
On Thu, Aug 19, 2010 at 12:38 AM, Samuel Gendler
<sgendler@xxxxxxxxxxxxxxxx> wrote:
> Yeah, although with 48GB of available memory and not that much concurrency,
> I'm not sure it matters that much. But point taken, I'll see about modifying
> the app such that work_mem gets set on a per-query basis.
>
> On Wed, Aug 18, 2010 at 11:24 PM, Scott Marlowe <scott.marlowe@xxxxxxxxx>
> wrote:
>>
>> On Wed, Aug 18, 2010 at 11:45 PM, Samuel Gendler
>> <sgendler@xxxxxxxxxxxxxxxx> wrote:
>> > Answered my own question. Cranking work_mem up to 350MB revealed that
>> > the in-memory sort requires more memory than the disk sort.
>>
>> Note that unless you run VERY few client connections, it's usually
>> better to leave work_mem somewhere in the 1 to 32Meg range and set it
>> higher only for the connection, user, or database that needs 350Meg.
>>
>> I.e.
>>
>> <connect>
>> set work_mem='512MB';
>> <execute query>
>>
>> OR
>>
>> alter user memoryhog set work_mem='512MB';
>>
>> OR
>>
>> alter database memhogdb set work_mem='512MB';
>
>
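
One caveat on the alter user / alter database approach quoted above:
those settings only take effect for sessions started after the change.
From a fresh connection you can verify the effective value:

SHOW work_mem;
-- or, to see where the value came from:
SELECT name, setting, source FROM pg_settings WHERE name = 'work_mem';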
To understand recursion, one must first understand recursion.