Joël Winteregg wrote:
Hi Richard,
Here is my problem. With some heavy inserting into a simple DB (one
table, no indexes) I can't get better performance than 8000 inserts/sec.
I'm testing it using a simple C program which uses libpq and which uses:
- an INSERT prepared statement (to avoid parsing the query repeatedly on
the server)
- transactions of 100000 inserts
Are each of the INSERTs in their own transaction?
No, as said above transactions are made of 100000 inserts...
Hmm - I read that as just meaning "inserted 100000 rows". You might find
that smaller batches provide peak performance.
If so, you'll be limited by the speed of the disk the WAL is running on.
That means you have two main options:
1. Have multiple connections inserting simultaneously.
Yes, you're right. That's what I have been testing, and it's what provides
the best performance! I saw that the postgresql frontend was using a lot
of CPU, but not both cores (I'm using a Pentium D, dual core). In
contrast, the postmaster process doesn't use much. Using several clients,
both CPUs are used and I saw an increase in performance (about 18000
inserts/sec).
So I think my bottleneck is the CPU speed rather than the disk speed,
what do you think?
Well, I think it's fair to say it's not disk. Let's see - the original
figure was 8000 inserts/sec, which is 0.125ms per insert. That sounds
plausible to me for a round-trip to process a simple command - are you
running the client app on the same machine, or is it over the network?
Two other things to bear in mind:
1. If you're running 8.2, you can put multiple sets of values in a single
INSERT:
http://www.postgresql.org/docs/8.2/static/sql-insert.html
2. You can do a COPY from libpq - is it really not possible?
--
Richard Huxton
Archonet Ltd