I am restoring a fairly sizable database from a pg_dump file (COPY FROM STDIN style of data) -- the dump file is ~40G. My system has 4 cores and 12G of RAM. I drop, then recreate the database, and I do the restore via:

    cat dumpfile | psql db_name

The trouble is that my system's free memory (according to top) drops to about 60M, which causes all operations on the server to grind to a halt, and this 40G restore takes a couple of hours to complete. I noted that the restore file doesn't do anything inappropriate such as creating indices BEFORE adding the data -- thus I can only suspect that my trouble has to do with performance-tuning ineptitude in postgresql.conf.

My settings (the ones I have changed):

    shared_buffers = 512MB
    temp_buffers = 512MB
    work_mem = 256MB
    maintenance_work_mem = 64MB
    max_fsm_pages = 655360
    vacuum_cost_page_hit = 3

Any insight would be most appreciated.

r.b.

Robert W. Burgholzer
Surface Water Modeler
Office of Water Supply and Planning
Virginia Department of Environmental Quality
rwburgholzer@xxxxxxxxxxxxxxxx
804-698-4405
Open Source Modeling Tools:
http://sourceforge.net/projects/npsource/

--
Sent via pgsql-admin mailing list (pgsql-admin@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin
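[Editor's note: a minimal sketch of two commonly suggested variants of the restore command above -- not a confirmed fix for this poster's memory problem. `dumpfile` and `db_name` are the placeholders from the original command; the 512MB value is an illustrative assumption, not a recommendation for this machine.]

```shell
# Replay the dump as one transaction (a single commit at the end instead of
# per-statement commits), which usually speeds up a plain-SQL restore:
psql --single-transaction -f dumpfile db_name

# Session-level tuning can be prepended to the dump without editing
# postgresql.conf -- e.g. more memory for the index builds that follow
# the COPY data load (512MB here is only an example value):
( echo "SET maintenance_work_mem = '512MB';"; cat dumpfile ) | psql db_name
```

Both variants also avoid the extra `cat | psql` pipeline of the original command by letting psql read the file (or a combined stream) directly.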