Also, as Tom stated, defining your test cases is a good idea before you start benchmarking. Our application has a load-data phase, then a query/active-use phase, so we benchmark both (data loads, then transactions), since they're quite different workloads and there are different ways to optimize for each.

For bulk loads, I would look into either batching several inserts into one transaction or the COPY command. Do some testing here to figure out what works best for your hardware/setup; for example, we usually batch several thousand inserts together for a pretty dramatic increase in performance. There's usually a sweet spot in there, depending on how your WAL is configured and on other concurrent activity.

Also, when testing bulk loads, be careful to set up a realistic test. If your application requires foreign keys and indexes, these can significantly slow down bulk inserts. There are several optimizations; check the mailing lists and the manual.

And lastly, when you're loading tons of data, as previously pointed out, the normal state of the system is to be heavily utilized. In fact, I would think this is ideal, since you know you're making full use of your hardware.

HTH,
Bucky

-----Original Message-----
From: pgsql-performance-owner@xxxxxxxxxxxxxx [mailto:pgsql-performance-owner@xxxxxxxxxxxxxx] On Behalf Of Mark Lewis
Sent: Thursday, August 24, 2006 9:40 AM
To: Fredrik Israelsson
Cc: pgsql-performance@xxxxxxxxxxxxxx
Subject: Re: [PERFORM] Is this way of testing a bad idea?

> Monitoring the processes using top reveals that the total amount of
> memory used slowly increases during the test. When reaching insert
> number 40000, or somewhere around that, memory is exhausted, and the
> system begins to swap. Each of the postmaster processes seems to use
> a constant amount of memory, but the total memory usage increases all
> the same.

So... what's using the memory? It doesn't sound like PG is using it, so is it your Java app?

If it's the Java app, then it could be that your code isn't remembering to do things like close statements, or perhaps the max heap size is set too large for your hardware. With early RHEL3 kernels there was also a quirky interaction with Sun's JVM where the system swaps itself to death even when less than half the physical memory is in use.

If it's neither PG nor Java, then perhaps you're misinterpreting the results of top. Remember that the "free" memory on a properly running Unix box that's been running for a while should hover just a bit above zero due to normal caching; read up on the 'free' command to see the actual memory utilization.

-- Mark
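
To make Bucky's batching suggestion concrete, here is a minimal JDBC sketch. The items table, the name column, and the batch size of 5000 are all hypothetical; as noted above, the right batch size is something to measure on your own hardware.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.List;

    public class BulkLoader {
        private static final int BATCH_SIZE = 5000; // illustrative; tune against your WAL config

        // Loads all rows in a single transaction, sending them to the
        // server in batches instead of one round trip per insert.
        static void load(Connection conn, List<String> names) throws SQLException {
            conn.setAutoCommit(false); // one transaction for the whole load
            String sql = "INSERT INTO items (name) VALUES (?)"; // hypothetical table
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                int count = 0;
                for (String name : names) {
                    ps.setString(1, name);
                    ps.addBatch();
                    if (++count % BATCH_SIZE == 0) {
                        ps.executeBatch(); // flush a full batch to the server
                    }
                }
                ps.executeBatch(); // flush the final partial batch
                conn.commit();
            } catch (SQLException e) {
                conn.rollback();
                throw e;
            }
        }
    }

The win comes from amortizing commit (WAL flush) and round-trip costs across many rows, which is why the sweet spot moves with WAL configuration and concurrent activity.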
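
For the COPY route from a Java client, newer versions of the PostgreSQL JDBC driver expose COPY through org.postgresql.copy.CopyManager (whether your driver version includes it is an assumption; a server-side COPY FROM a file, or psql's \copy, work as well). A sketch, again using the hypothetical items table:

    import java.io.IOException;
    import java.io.StringReader;
    import java.sql.Connection;
    import java.sql.SQLException;

    import org.postgresql.PGConnection;
    import org.postgresql.copy.CopyManager;

    public class CopyLoader {
        // Streams rows to the server in a single COPY, which avoids
        // per-statement parse/plan overhead entirely. Returns the row count.
        static long load(Connection conn, String rows) throws SQLException, IOException {
            CopyManager copy = conn.unwrap(PGConnection.class).getCopyAPI();
            // Default text format: one value per line for a single-column copy.
            return copy.copyIn("COPY items (name) FROM STDIN", new StringReader(rows));
        }
    }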
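
On Mark's point about the Java side forgetting to close statements: each Statement and ResultSet that is never closed holds driver-side buffers until the garbage collector happens to finalize it, so a loop that keeps creating statements can grow steadily, much like the behavior described. A minimal sketch of the leak-free pattern, using the modern try-with-resources idiom and the same hypothetical table:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class QueryExample {
        // Both resources are closed automatically, even on exceptions,
        // so repeated calls cannot accumulate open statements.
        static long countRows(Connection conn) throws SQLException {
            String sql = "SELECT count(*) FROM items"; // hypothetical table
            try (PreparedStatement ps = conn.prepareStatement(sql);
                 ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getLong(1);
            }
        }
    }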