Hi Everyone,
I have a table with several hundred million rows of timestamped
values. Using pg_dump, we are able to dump the entire table to disk
without any problem. However, I would like to retrieve a large subset
of the data from this table using something like:
COPY (SELECT * FROM history WHERE timestamp > '2009-01-01') TO STDOUT;
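(For reference, one way to stream this from the shell; the database
name and output file below are just placeholders:)

psql -d mydb -c "COPY (SELECT * FROM history WHERE timestamp > '2009-01-01') TO STDOUT" > history_subset.copy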
Executing this query causes our server to consume all available swap
and crash. Can anyone help me figure out what needs to be done to
allow this query to complete? How long it takes doesn't really
matter, as long as it can be performed reliably.
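If fetching in batches is the right answer, I assume a cursor would
work; a rough sketch, with the cursor name and batch size purely
illustrative:

BEGIN;
DECLARE hist_cur CURSOR FOR
    SELECT * FROM history WHERE timestamp > '2009-01-01';
-- repeat until FETCH returns no rows
FETCH FORWARD 10000 FROM hist_cur;
CLOSE hist_cur;
COMMIT;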
The database currently lives on a Red Hat EL 5.3 server with 16GB of
RAM and 4GB of swap, running PostgreSQL 8.3.7. Possibly relevant
lines from postgresql.conf:
shared_buffers = 4GB
work_mem = 32MB
maintenance_work_mem = 1GB
effective_cache_size = 8GB
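Would psql's client-side \copy behave any differently here? Something
like the following, with the output file name again just illustrative:

\copy (SELECT * FROM history WHERE timestamp > '2009-01-01') to 'history_subset.copy'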
Tom