Re: PG as in-memory db? How to warm up and re-populate buffers? How to read in all tuples into memory?

On Thu, Mar 1, 2012 at 9:28 PM, Peter van Hardenberg <pvh@xxxxxx> wrote:
>> Setting work_mem to hundreds of MB in a 4G system is suicide. Tens
>> even is dangerous.
>>
>
> Why do you say that? We've had work_mem happily at 100MB for years. Is
> there a particular degenerate case you're concerned about?

Me too.

But I've analyzed the queries I'll be sending to the database and I've
carefully bounded the effective amount of memory they will use under the
load I'll be experiencing.

Saying that it should be set to 100MB without considering those matters
is the suicide part. work_mem applies to each sort (or hash) operation,
not to each connection. Suppose, just for the sake of argument, that each
connection performs 5 such sorts (ie, 5 joins of big tables - not
unthinkable), and that max_connections is at its default of 100: the
system could then request as much as 100 x 5 x 100MB = 50G of RAM.
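As a rough illustration you can compute that ceiling from the live
settings; the factor of 5 sorts/hashes per backend is the hypothetical
assumption from above, not something the server reports:

    -- pg_settings reports work_mem in kB; multiply out the worst case
    SELECT pg_size_pretty(
             (SELECT setting::bigint * 1024 FROM pg_settings WHERE name = 'work_mem')
           * (SELECT setting::bigint        FROM pg_settings WHERE name = 'max_connections')
           * 5  -- assumed concurrent sort/hash nodes per connection
           ) AS theoretical_peak;

With work_mem = 100MB and max_connections = 100 that works out to the
~50G figure above.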

I set work_mem higher in my own system because I *know* most of the
connections will not perform any merge or hash joins, nor sort their
output, so they won't use work_mem even once. The connections that will,
I have limited on the application side to a handful, hence I *know* that
the 50G theoretical maximum will not be reached.
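One way to get a similar guarantee without trusting every connection is
to keep the cluster-wide setting modest and raise work_mem only for the
sessions or the role that actually runs the big sorts. A minimal sketch
("reporting" is a hypothetical role name, and the sizes are illustrative):

    -- give only the heavy role a larger allowance
    ALTER ROLE reporting SET work_mem = '256MB';

    -- or raise it just for one session / one query, then reset
    SET work_mem = '256MB';
    -- ... run the big query ...
    RESET work_mem;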

Can the OP say the same? I have no reason to think so, hence I don't
suggest 100MB is safe on a 4G system.
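The way to find out is to look at what the workload actually does:
EXPLAIN ANALYZE on the hot queries shows whether sorts stay within
work_mem or spill to disk. A sketch (table and column names are
placeholders, numbers illustrative):

    EXPLAIN ANALYZE
    SELECT * FROM big_table ORDER BY some_column;

    -- output includes a line like:
    --   Sort Method: external merge  Disk: 81920kB   (spilled; work_mem too small)
    -- or:
    --   Sort Method: quicksort  Memory: 45612kB      (fit within work_mem)

Only once you know how many such nodes run concurrently can you size
work_mem with any confidence.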
