On Tue, Oct 26, 2010 at 7:44 AM, Divakar Singh <dpsmails@xxxxxxxxx> wrote:
> Hi Merlin,
> Thanks for your quick input.
> Well 1 difference worth mentioning:
> I am inserting each row in a separate transaction, due to design of my
> program.

Well, that right there is going to define your application performance.
You basically have three major issues: postgresql executes each query
synchronously through the protocol, you pay transaction overhead on
every row, and you pay the i/o cost of syncing each transaction. libpq
supports asynchronous queries, but only from the client's point of
view, so that only helps if you have non-trivial work to do setting up
each query. The database is inherently capable of doing what you want
it to do...you may just have to rethink certain things if you want to
unlock the true power of postgres...

You have several broad areas of attack:

*) client side: use prepared queries (PQexecPrepared), possibly
asynchronously (PQsendQueryPrepared). You can reasonably expect a
5-50% speedup if you are not i/o bound (sketch below)

*) Stage data to a temp table: temp tables are not wal logged or
synced. Periodically they can be flushed to a permanent table, at the
cost of possible data loss (sketch below)

*) Relax the sync policy (synchronous_commit/fsync) -- be advised
these settings are dangerous (example below)

*) Multiple client writers -- as long as you are not i/o bound, you
will see big improvements in tps from multiple clients

*) Stage/queue application data before inserting it -- this requires
retooling the application, but you can see an orders-of-magnitude jump
in insert performance (sketch below)

merlin
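
To make the first point concrete, here is a minimal libpq sketch of the
prepared-statement approach. The table t(a int, b text), the conninfo
string, and the row values are made-up placeholders, and error handling
is kept to a bare minimum:

#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    /* conninfo is a placeholder -- point it at your own database */
    PGconn *conn = PQconnectdb("dbname=test");
    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* parse and plan the insert exactly once */
    PGresult *res = PQprepare(conn, "ins",
                              "INSERT INTO t (a, b) VALUES ($1, $2)",
                              2, NULL);
    PQclear(res);

    for (int i = 0; i < 100000; i++)
    {
        char a[32];
        const char *values[2];

        snprintf(a, sizeof(a), "%d", i);
        values[0] = a;
        values[1] = "some text";

        /* only the parameter values cross the wire for each row */
        res = PQexecPrepared(conn, "ins", 2, values, NULL, NULL, 0);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "insert failed: %s", PQerrorMessage(conn));
        PQclear(res);
    }

    PQfinish(conn);
    return 0;
}

If building the next row's values is itself expensive, the same loop can
dispatch with PQsendQueryPrepared and collect results with PQgetResult,
overlapping client-side work with the server round trip.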
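
The temp-table staging idea, again only as a rough sketch building on the
connection above; run(), make_stage(), flush_stage(), and the table names
t/t_stage are invented for illustration:

#include <stdio.h>
#include <libpq-fe.h>

/* tiny helper: run one command and free the result */
static void run(PGconn *conn, const char *sql)
{
    PGresult *res = PQexec(conn, sql);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "%s failed: %s", sql, PQerrorMessage(conn));
    PQclear(res);
}

/* once per session: a temp table shaped like the real one.  inserts
   into it are cheap because it is not wal logged or synced. */
static void make_stage(PGconn *conn)
{
    run(conn, "CREATE TEMP TABLE t_stage (LIKE t INCLUDING DEFAULTS)");
}

/* called periodically: move staged rows into the permanent table in one
   transaction.  anything still sitting in t_stage if the backend dies
   is lost -- that is the data-loss trade-off mentioned above. */
static void flush_stage(PGconn *conn)
{
    run(conn, "BEGIN");
    run(conn, "INSERT INTO t SELECT * FROM t_stage");
    run(conn, "TRUNCATE t_stage");
    run(conn, "COMMIT");
}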
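
On the sync policy point: synchronous_commit can be relaxed per session,
which keeps the database consistent after a crash but can drop the last
few commits; fsync = off in postgresql.conf is the setting that can
actually corrupt data. A one-line example of the session-level form,
reusing the run() helper above:

    /* commits return before the wal hits disk; a crash can lose the
       most recent transactions, but the database stays consistent */
    run(conn, "SET synchronous_commit = off");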
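
And for the last point, one way (of several) to queue application data is
to buffer rows in the client and push each batch with COPY, which removes
both the per-statement and per-transaction overhead. copy_batch() and the
tab-separated row format here are just an illustration:

#include <stdio.h>
#include <libpq-fe.h>

/* stream nrows rows into table t with COPY ... FROM STDIN */
static int copy_batch(PGconn *conn, int nrows)
{
    PGresult *res = PQexec(conn, "COPY t (a, b) FROM STDIN");
    if (PQresultStatus(res) != PGRES_COPY_IN)
    {
        fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
        PQclear(res);
        return -1;
    }
    PQclear(res);

    for (int i = 0; i < nrows; i++)
    {
        char line[64];
        int  len = snprintf(line, sizeof(line), "%d\tsome text\n", i);

        /* text copy format: tab-separated columns, newline-terminated rows */
        PQputCopyData(conn, line, len);
    }

    PQputCopyEnd(conn, NULL);          /* NULL means "no error, end the copy" */

    res = PQgetResult(conn);           /* final status of the COPY command */
    int ok = (PQresultStatus(res) == PGRES_COMMAND_OK) ? 0 : -1;
    PQclear(res);
    return ok;
}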