On Mon, 2006-06-19 at 20:09 -0400, Brian Hurt wrote:
> 5) The performance of Postgres, at least on inserts, depends critically
> on how you program it. On the same hardware, performance for me varied
> by a factor of over 300, about 2.5 orders of magnitude. Programs which
> are unaware of transactions and are designed to be highly portable are
> likely to hit the abysmal side of performance, where the transaction
> overhead kills performance.

I'm quite interested in this comment. Transactions have always been part
of the SQL standard, so being unaware of them when using SQL is strange
to me.

Can you say more about what you expected the performance "should have
been"? I don't want to flame you, just to understand that viewpoint.
What are you implicitly comparing against? With which options enabled?
How are you submitting these SQL statements? Through what API?

> I'm not sure there is a fix for this (let alone an easy fix) - simply
> dropping transactions is obviously not it.

I'd like to see what other "fixes" we might think of. Perhaps we might
consider a session-level mode that groups consecutive atomic INSERTs
into the same table into a single larger transaction. That might be
something we can do at the client level, for example - there's a rough
sketch of that idea at the end of this mail.

> Programs that are transaction aware and willing to use
> PostgreSQL-specific features can get surprisingly excellent
> performance. Simply being transaction-aware and doing multiple inserts
> per transaction greatly increases performance, giving an easy order of
> magnitude increase (wrapping 10 inserts in a transaction gives a 10x
> performance boost).

This is exactly the same as most other transactional RDBMSs.
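
To make the arithmetic concrete: the cost that dominates single-row
inserts is the commit (the WAL flush), which is paid once per
transaction. A rough Python/psycopg2 sketch of the two styles - the
table "t", its column and the connection string are invented for
illustration:

    import psycopg2

    conn = psycopg2.connect("dbname=test")   # hypothetical DSN
    cur = conn.cursor()

    # Transaction-unaware style: each INSERT is its own transaction,
    # so every row pays a full commit (one WAL flush per row).
    conn.autocommit = True
    for i in range(10):
        cur.execute("INSERT INTO t (v) VALUES (%s)", (i,))

    # Transaction-aware style: with autocommit off, psycopg2 opens a
    # transaction implicitly on the first statement, so all ten rows
    # share a single commit.
    conn.autocommit = False
    for i in range(10):
        cur.execute("INSERT INTO t (v) VALUES (%s)", (i,))
    conn.commit()   # one WAL flush for ten rows - the 10x effect above

The second loop does exactly the same work but flushes once, which is
where the easy order-of-magnitude improvement comes from.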
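
And for the client-level grouping mentioned above, a hypothetical
wrapper (all names invented, error handling omitted) might look like
this:

    class BatchingCursor:
        # Hypothetical client-side "mode": group single-row INSERTs
        # from a transaction-unaware application into one transaction,
        # committing every batch_size statements.
        def __init__(self, conn, batch_size=100):
            self.conn = conn
            self.cur = conn.cursor()
            self.batch_size = batch_size
            self.pending = 0

        def execute(self, sql, params=None):
            self.cur.execute(sql, params)
            self.pending += 1
            if self.pending >= self.batch_size:
                self.flush()

        def flush(self):
            self.conn.commit()   # one commit covers the whole batch
            self.pending = 0

The application keeps issuing one INSERT per call and never sees a
transaction, yet pays only one commit per batch. A final flush() at
session end and the handling of a failed batch (rollback and retry)
are glossed over here - and those are exactly the parts that would make
a server-side version of this hard.

--
Simon Riggs
EnterpriseDB   http://www.enterprisedb.com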