On Mon, Feb 9, 2009 at 2:01 PM, Matt Magoffin <postgresql.org@xxxxxxx> wrote:
>> I don't think changing work_mem down is actually going to reduce the
>> memory allocated without changing the plan to something less optimal.
>> In the end, all of this is putting off the inevitable: if you get enough
>> PGs going and enough requests and whatnot, you're going to start running
>> out of memory again. The same goes if you get larger data sets that take
>> up more hash-table space or similar. Eventually you might need a bigger
>> box, but let's first try to make full use of the current one.
>
> Yes... and indeed changing vm.overcommit_ratio to 80 does allow that
> previously-failing query to execute successfully. Do you think this is
> also what caused the out-of-memory error we saw today, just when a
> transaction was initiated?

Curious, what does the EXPLAIN ANALYZE output look like for that one?

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
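[Editor's note: the effect of the vm.overcommit_ratio change discussed above can be sketched with the Linux strict-overcommit formula. With vm.overcommit_memory = 2, the kernel caps total committed memory at CommitLimit = swap + (overcommit_ratio / 100) * RAM, so raising the ratio from the default 50 to 80 raises that cap. The 16 GB RAM / 2 GB swap figures below are illustrative assumptions, not the poster's actual box:]

```shell
# Illustrative machine sizes (assumptions, in kB)
ram_kb=$((16 * 1024 * 1024))   # 16 GB RAM
swap_kb=$((2 * 1024 * 1024))   # 2 GB swap
ratio=80                       # vm.overcommit_ratio after the change

# Strict-overcommit accounting (vm.overcommit_memory = 2):
#   CommitLimit = swap + (overcommit_ratio / 100) * RAM
commit_limit_kb=$((swap_kb + ram_kb * ratio / 100))

echo "CommitLimit: ${commit_limit_kb} kB"
# Compare against the kernel's own figure: grep Commit /proc/meminfo
```

With ratio=50 the same box would cap out at swap + 8 GB; at 80 the cap rises by roughly 5 GB, which is why the previously-failing allocation now succeeds.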