On Tue, Oct 14, 2008 at 5:08 PM, Rainer Mager <rainer@xxxxxxxxxx> wrote:
> I have an interesting performance improvement need. As part of the automatic
> test suite we run in our development environment, we re-initialize our test
> database a number of times in order to ensure it is clean before running a
> test. We currently do this by dropping the public schema and then recreating
> our tables (roughly 30 tables total). After that we do normal inserts, etc.,
> but never with very much data. My question is, what settings can we tweak to
> improve performance in this scenario? Specifically, if there were a way to
> tell Postgres to keep all operations in memory, that would probably be
> ideal.

I'm not sure we've identified the problem just yet. Do you have the
autovacuum daemon enabled? Are your system catalogs bloated with dead
tuples?

Could you approach this with a template database that is set up ahead of
time, so that each run just does "create database test with template
test_template" or something like that? Also, if you're recreating a
database you might need to analyze it before you start firing queries
at it.

PostgreSQL buffers / caches what it can in shared_buffers, and the OS
caches data in the kernel cache as well, so having more memory makes
things much faster. But if it writes, it needs to write. If the database
server where this is happening doesn't hold any important data, you might
see a boost from disabling fsync, but keep in mind that a server that
loses power or crashes with fsync off can corrupt every database in the
cluster.

If you can't fix the problem with any of those suggestions, you might
need to throw hardware at the problem.
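On the autovacuum / catalog-bloat questions, a quick way to look is a rough
sketch like the following, assuming 8.3 or later (where pg_stat_sys_tables
exposes the n_dead_tup counter):

    -- Is autovacuum turned on at all?
    SHOW autovacuum;

    -- How bloated are the system catalogs? Repeatedly dropping the schema and
    -- recreating ~30 tables churns pg_class, pg_attribute, pg_depend, etc.,
    -- so look for large dead-tuple counts on those.
    SELECT relname, n_live_tup, n_dead_tup
    FROM pg_stat_sys_tables
    ORDER BY n_dead_tup DESC
    LIMIT 10;

Big numbers there would suggest vacuuming the catalogs (or letting autovacuum
catch up) before blaming anything else.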
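The template-database idea would look roughly like this; "test" and
"test_template" are just the names from the example above, so adjust to taste:

    -- One-time setup: build a pristine copy of the schema in a template database.
    CREATE DATABASE test_template;
    -- ...connect to test_template, create the ~30 tables, load any fixed seed data...

    -- At the start of each test run: drop the old copy and clone the template.
    -- CREATE DATABASE ... TEMPLATE is a file-level copy of the template's
    -- directory, and it requires that nobody else is connected to test_template.
    DROP DATABASE IF EXISTS test;
    CREATE DATABASE test WITH TEMPLATE test_template;

    -- Then, connected to the freshly created test database and after loading
    -- whatever per-test data you need, refresh the planner statistics:
    ANALYZE;

That turns thirty DROP/CREATE TABLE statements into one directory copy per
run, and it sidesteps the catalog churn that dropping and recreating the
schema causes.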
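And if you do go the fsync route on a strictly throwaway development box, the
knobs live in postgresql.conf. A minimal sketch (synchronous_commit only
exists from 8.3 on, and is the much safer of the two; the shared_buffers
figure is just an example value):

    -- Check the current values from psql:
    SHOW fsync;
    SHOW synchronous_commit;
    SHOW shared_buffers;

    -- In postgresql.conf on the dev server. fsync takes effect on a reload
    -- (pg_ctl reload), synchronous_commit can even be set per session, and
    -- shared_buffers needs a full restart:
    --   synchronous_commit = off   -- loses the last few commits on a crash, but no corruption
    --   fsync = off                -- fastest, but a crash can corrupt every database in the cluster
    --   shared_buffers = 256MB     -- example value only; size it to the working set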