I am evaluating PostgreSQL as a candidate to cooperate with a Java application.

Performance test setup: Only one table in the database schema. The table contains a bytea column plus some other columns. The PostgreSQL server runs on Linux.

Test execution: The Java application connects through TCP/IP (JDBC) and performs 50000 inserts.

Result: Monitoring the processes using top reveals that the total amount of memory used slowly increases during the test. When reaching insert number 40000, or somewhere around that, memory is exhausted and the system begins to swap. Each of the postmaster processes seems to use a constant amount of memory, but the total memory usage increases all the same.

Questions: Is this way of testing the performance a bad idea? Actual database usage will be a mixture of inserts and queries. Maybe the test should behave like that instead, but I wanted to keep things simple. Why is the memory usage slowly increasing during the whole test? Is there a way of keeping PostgreSQL from exhausting memory during the test? I have looked for some fitting parameters to use, but I am probably too much of a novice to understand which to choose.

Thanks in advance,
Fredrik Israelsson
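For reference, the insert loop is roughly the following (a minimal sketch, not the actual test code: the connection URL, credentials, table name, column names, and payload size are placeholders I am showing only to illustrate the shape of the test):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class InsertTest {
    public static void main(String[] args) throws SQLException {
        // Placeholder URL and credentials for illustration only.
        String url = "jdbc:postgresql://localhost:5432/testdb";
        try (Connection conn = DriverManager.getConnection(url, "test", "test")) {
            conn.setAutoCommit(false);
            // Hypothetical table: an integer key plus a bytea column.
            String sql = "INSERT INTO test_table (id, payload) VALUES (?, ?)";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                byte[] payload = new byte[1024]; // dummy bytea content
                for (int i = 0; i < 50000; i++) {
                    ps.setInt(1, i);
                    ps.setBytes(2, payload);
                    ps.executeUpdate();
                    if (i % 1000 == 0) {
                        conn.commit(); // commit in chunks, not per row
                    }
                }
                conn.commit();
            }
        }
    }
}
```

The sketch requires the PostgreSQL JDBC driver on the classpath and a running server.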