Re: Insert performance


 



Hi and thanks for your quick answer :-)

> > 
> >>> Here is my problem. With some heavy inserts into a simple DB (one
> >>> table, no indexes) I can't get better performance than 8000 inserts/sec.
> >>> I'm testing it using a simple C program which uses libpq and which uses:
> >>> - an INSERT prepared statement (to avoid too much query parsing on the
> >>> server)
> >>> - transactions of 100000 inserts
> >> Are each of the INSERTs in their own transaction?
> >>
> > 
> > No, as said above, transactions are made of 100000 inserts...
> 
> Hmm - I read that as just meaning "inserted 100000 rows". You might find 
> that smaller batches provide peak performance.
> 

Ahh ok ;-) sorry for my bad English... (yeah, I have been testing
several transaction sizes: 10000, 20000 and 100000)
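
For reference, the kind of batched, prepared-statement loop I'm describing
would look roughly like this with libpq (connection string, table and column
names are invented for the sketch, and error handling is mostly left out):

    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        /* Connection string and test_table(id, val) are just examples. */
        PGconn *conn = PQconnectdb("dbname=test");
        if (PQstatus(conn) != CONNECTION_OK) {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }

        /* Prepare the INSERT once so the server parses/plans it a single time. */
        PGresult *res = PQprepare(conn, "ins",
                                  "INSERT INTO test_table (id, val) VALUES ($1, $2)",
                                  2, NULL);
        PQclear(res);

        /* One big transaction: the WAL only has to be flushed at COMMIT,
           not once per row. */
        PQclear(PQexec(conn, "BEGIN"));
        for (int i = 0; i < 100000; i++) {
            char id[32], val[32];
            const char *params[2] = { id, val };
            snprintf(id, sizeof id, "%d", i);
            snprintf(val, sizeof val, "%d", i * 2);
            PQclear(PQexecPrepared(conn, "ins", 2, params, NULL, NULL, 0));
        }
        PQclear(PQexec(conn, "COMMIT"));

        PQfinish(conn);
        return 0;
    }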


> >> If so, you'll be limited by the speed of the disk the WAL is running on.
> >>
> >> That means you have two main options:
> >> 1. Have multiple connections inserting simultaneously.
> > 
> > Yes, you're right. That's what I have been testing and what provides the
> > best performance! I saw that the postgresql frontend was using a lot of
> > CPU, but not both cores (I'm using a Pentium D, dual core). In contrast,
> > the postmaster process doesn't use many resources. Using several clients,
> > both CPUs are used and I saw an increase in performance (about 18000
> > inserts/sec).
> > 
> > So I think my bottleneck is more the CPU speed than the disk speed,
> > what do you think?
> 
> Well, I think it's fair to say it's not disk. Let's see - the original 
> figure was 8000 inserts/sec, which is 0.125ms per insert. That sounds 
> plausible to me for a round-trip to process a simple command - are you 
> running the client app on the same machine, or is it over the network?

I did both tests. On the local machine (using UNIX sockets) I can reach
18000 inserts/sec with 10 clients and prepared statements. The same test
with clients on a remote machine gives me 13000 inserts/sec.

Now, with multiple clients (multi-threaded inserts) both of my CPUs are
quite well used (both around 90%), so maybe the disk speed is now my
bottleneck. What do you think? Or maybe I need a better CPU?
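
To show what I mean by multiple clients, here is a rough sketch of the
multi-connection version, one PGconn per thread (thread count, connection
string and table are again just examples; each thread keeps its own
connection because a single PGconn must not be used by several threads at
once):

    #include <pthread.h>
    #include <stdio.h>
    #include <libpq-fe.h>

    #define NTHREADS 4
    #define ROWS_PER_THREAD 100000

    static void *worker(void *arg)
    {
        long tid = (long) arg;
        PGconn *conn = PQconnectdb("dbname=test");   /* one connection per thread */
        if (PQstatus(conn) != CONNECTION_OK) {
            fprintf(stderr, "thread %ld: %s", tid, PQerrorMessage(conn));
            PQfinish(conn);
            return NULL;
        }
        PQclear(PQexec(conn, "BEGIN"));
        for (int i = 0; i < ROWS_PER_THREAD; i++) {
            char sql[128];
            snprintf(sql, sizeof sql,
                     "INSERT INTO test_table (id, val) VALUES (%ld, %d)",
                     tid * (long) ROWS_PER_THREAD + i, i);
            PQclear(PQexec(conn, sql));
        }
        PQclear(PQexec(conn, "COMMIT"));
        PQfinish(conn);
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[NTHREADS];
        for (long i = 0; i < NTHREADS; i++)
            pthread_create(&threads[i], NULL, worker, (void *) i);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(threads[i], NULL);
        return 0;
    }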

> 
> Two other things to bear in mind:
> 1. If you're running 8.2 you can have multiple sets of values in an INSERT
> http://www.postgresql.org/docs/8.2/static/sql-insert.html
> 

Yeah, I'm running the 8.2.3 version! I didn't know about multi-row
insert sets! Thanks for the tip ;-)
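
Just to illustrate the multi-row form (assuming `conn` and the same invented
test_table as in the first sketch):

    /* Illustrative only: one INSERT carrying several rows (8.2+ syntax). */
    static int insert_batch(PGconn *conn)
    {
        PGresult *res = PQexec(conn,
            "INSERT INTO test_table (id, val) VALUES (1, 10), (2, 20), (3, 30)");
        int ok = (PQresultStatus(res) == PGRES_COMMAND_OK) ? 0 : -1;
        if (ok != 0)
            fprintf(stderr, "multi-row insert failed: %s", PQerrorMessage(conn));
        PQclear(res);
        return ok;
    }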

> 2. You can do a COPY from libpq - is it really not possible?
> 

Not really, but I have been testing it and inserts are flying (about
100000 inserts/sec)!!
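
For completeness, this is roughly how my COPY test goes through libpq, using
PQputCopyData / PQputCopyEnd with COPY ... FROM STDIN (same invented table,
same includes assumed as in the first sketch):

    /* Rough sketch of bulk loading with COPY ... FROM STDIN through libpq. */
    static int copy_rows(PGconn *conn)
    {
        PGresult *res = PQexec(conn, "COPY test_table (id, val) FROM STDIN");
        if (PQresultStatus(res) != PGRES_COPY_IN) {
            fprintf(stderr, "COPY failed to start: %s", PQerrorMessage(conn));
            PQclear(res);
            return -1;
        }
        PQclear(res);

        for (int i = 0; i < 100000; i++) {
            char line[64];
            /* Text COPY format: tab-separated columns, newline-terminated rows. */
            int len = snprintf(line, sizeof line, "%d\t%d\n", i, i * 2);
            if (PQputCopyData(conn, line, len) != 1)
                return -1;
        }

        if (PQputCopyEnd(conn, NULL) != 1)   /* NULL = finish normally, no error */
            return -1;
        res = PQgetResult(conn);             /* final status of the COPY command */
        int ok = (PQresultStatus(res) == PGRES_COMMAND_OK) ? 0 : -1;
        PQclear(res);
        return ok;
    }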




