On Feb 2, 2007, at 9:46 AM, Tom Lane wrote:
Gábriel Ákos <akos.gabriel@xxxxxxxxxx> writes:
Richard Huxton wrote:
Kirk Wythers wrote:
I am trying to do fairly simple joins on climate databases that
should return ~7 million rows of data.
If you look at the message carefully, it looks to me like the
client is running out of memory. It can't allocate that 8.4 MB :)
Right, the join result doesn't fit in the client's memory limit.
This is not too surprising, as the out-of-the-box ulimit settings
on Tiger appear to be
$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) 6144
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 256
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 266
virtual memory (kbytes, -v) unlimited
$
6 meg of memory isn't gonna hold 7 million rows ... so either raise
"ulimit -d" (quite a lot) or else use a cursor to fetch the result
in segments.
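For illustration, the cursor approach in psql might look roughly like
this (the cursor name, batch size, and query are only placeholders for
the real join):

BEGIN;
DECLARE bigjoin CURSOR FOR
    SELECT ... ;             -- the original 7-million-row join goes here
FETCH 10000 FROM bigjoin;    -- repeat until it returns no more rows
CLOSE bigjoin;
COMMIT;

If I remember right, psql in 8.2 can also do this for you: setting
\set FETCH_COUNT 10000 makes it retrieve results in 10000-row batches
behind the scenes instead of slurping the whole thing at once.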
Thanks Tom... Any suggestions as to how much to raise ulimit -d, and
how to go about raising it?
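For illustration, raising the limit is just a shell builtin; in bash it
might look like this (the 1048576 figure is an arbitrary 1 GB, and
"climate" stands in for the real database name):

$ ulimit -d              # show the current soft limit, in kbytes
6144
$ ulimit -d 1048576      # raise it for this shell and its children,
                         # up to whatever hard limit the system allows
$ psql climate

The new value only lasts for that shell session; to make it stick you
could put the ulimit line in the shell's startup file (e.g.
~/.bash_profile) so it runs before psql is started.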