My work_mem setting is the default (1MB), but I'm doing a simple
"select * from schema.table" with no joins, ORDER BY, DISTINCT, or
hashing in the statement, so I don't think that's it.
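(To double-check that the plan really has no sort or hash in it,
EXPLAIN on the bare select should show nothing but a sequential scan;
"schema.table" here is the placeholder name from above:

    EXPLAIN SELECT * FROM schema.table;
    -- expected: a single "Seq Scan" node, no Sort/Hash/HashAggregate
)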
I’m getting the error in psql, so based
on what Scott said, it sounds like there’s nothing to do server side.
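(One client-side workaround worth noting, since the error is likely
psql buffering the entire result set: psql's FETCH_COUNT variable
makes it read the result through a cursor in batches instead of
holding every row in memory; 1000 below is just an example batch size:

    \set FETCH_COUNT 1000
    select * from schema.table;

Other clients can get the same effect with an explicit
DECLARE ... FETCH loop.)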
Thanks!
From: Anj Adu [mailto:fotographs@xxxxxxxxx]
Sent: Wednesday, July 15, 2009 12:18 PM
To: Lee, Mija
Cc: pgsql-admin@xxxxxxxxxxxxxx
Subject: Re: out of memory for query, partitioning & vacuuming
What are your work_mem settings? work_mem limits the amount of memory
a sort or hash operation can use before it spills to disk. You may
have a large value set, and a few sessions could end up using all the
available memory.
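As an illustration of that risk (the value below is made up): work_mem
applies per sort or hash operation, per session, and any client can
raise it for its own session without touching the server config:

    SET work_mem = '512MB';  -- each sort/hash in this session may now use up to 512MB
    SHOW work_mem;           -- confirm the session value

A few concurrent sessions sorting large sets at a value like that can
exhaust the server's memory.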
Read this on work_mem:
http://www.postgresql.org/docs/8.3/static/runtime-config-resource.html
On Wed, Jul 15, 2009 at 11:59 AM, Lee, Mija <mija@xxxxxxxxxx> wrote:
Hi -
I'm not a particularly experienced dba, so I'm hoping this isn't a
ridiculous question.
I have a 5 GB table with lots of churn in a 14 GB database. Querying
this one table without limits has just started throwing "out of memory
for query" from multiple clients (psql, java). I'm working with the
developers to look at it from the client side, but I'm wondering if
either partitioning the table or more aggressive vacuuming of this table
would have an effect on the out of memory problem.
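(A sketch of how to check both hunches on 8.3, with "schema.table"
standing in for the real table name: the built-in size functions show
whether churn has bloated the table on disk, and VACUUM VERBOSE
reports the dead row versions it finds:

    SELECT pg_size_pretty(pg_total_relation_size('schema.table'));
    VACUUM VERBOSE schema.table;
)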
I'm running 8.3.5 on Solaris 10.
Thanks!
Mija