Thanks for the information. I ran the benchmark on a very simple table in a local database (1 table, 2 fields: one bigserial, the other text).

====================================================================
INSERT INTO measurement_type(measurement_type_name) VALUES ('Hello')
====================================================================

I turned synchronous_commit off. Writing a single row to the local database takes between 0.1 ms and 0.5 ms.

I compared this with a flat text file. Writing a single row to the file takes between 0.005 ms and 0.05 ms.

The difference is big. Is this the expected result? Are you getting similar results?

I know there will be some overhead in writing to a database compared to a flat text file (network communication, parsing of the SQL statement, ...). However, is there any way to improve things further, so that PostgreSQL write performance comes close to file performance? If not, I need to plan another strategy to migrate my flat-text-file system into a PostgreSQL system smoothly.

Thanks and Regards
Yan Cheng CHEOK

--- On Tue, 1/5/10, Craig Ringer <craig@xxxxxxxxxxxxxxxxxxxxx> wrote:

> From: Craig Ringer <craig@xxxxxxxxxxxxxxxxxxxxx>
> Subject: Re: PostgreSQL Write Performance
> To: "Yan Cheng Cheok" <yccheok@xxxxxxxxx>
> Cc: "Dann Corbit" <DCorbit@xxxxxxxxx>, pgsql-general@xxxxxxxxxxxxxx
> Date: Tuesday, January 5, 2010, 7:20 PM
> On 5/01/2010 3:30 PM, Yan Cheng Cheok wrote:
> >>> What is the actual problem you are trying to solve?
> >
> > I am currently developing a database system for a high-speed
> > measurement machine.
> >
> > The time taken to perform the measurement per unit is on the order
> > of ~30 milliseconds. We need to record the measurement result for
> > every single unit, so the time taken to record each result must be
> > far below a millisecond, so that it has nearly zero impact on the
> > machine speed (otherwise, the machine has to wait for the database
> > to finish writing before measuring the next unit).
>
> The commit_delay and synchronous_commit parameters may help you if
> you want to do each insert as a separate transaction. Note that with
> these parameters there's some risk of very recently committed data
> being lost if the server OS crashes or the server hardware is
> powered off/power-cycled unexpectedly. PostgreSQL itself crashing
> shouldn't cause loss of the committed data, though.
>
> Alternately, you can accumulate small batches of measurements in
> your app and do multi-valued INSERTs once you have a few (say 10)
> collected up. You'd have to be prepared to lose those if the app
> crashed, though.
>
> Another option is to continue using your flat file, and have a
> "reader" process tailing the flat file and inserting new records
> into the database as they become available in the flat file. The
> reader could batch inserts intelligently, keep a record on disk of
> its progress, rotate the flat file periodically, etc.
>
> --
> Craig Ringer
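
For reference, a minimal sketch of what the benchmark table above could look like, given the description (one bigserial field, one text field); the id column name is an assumption, since only the INSERT statement appears in the thread:

====================================================================
-- Hypothetical reconstruction of the two-column benchmark table.
CREATE TABLE measurement_type (
    measurement_type_id   bigserial PRIMARY KEY,  -- assumed name
    measurement_type_name text
);
====================================================================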
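
As an illustration of the commit_delay/synchronous_commit tuning Craig mentions above, a minimal sketch; the numeric values are illustrative assumptions, not recommendations:

====================================================================
-- Per session: don't wait for the WAL flush at commit time.
-- Recently committed rows can be lost on an OS or hardware crash.
SET synchronous_commit = off;

-- Server-wide, in postgresql.conf (values are only examples):
-- commit_delay = 1000    # microseconds to wait before flushing WAL
-- commit_siblings = 5    # apply the delay only if at least 5 other
--                        # transactions are active
====================================================================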
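
The multi-valued INSERT batching Craig suggests might look like this against the benchmark table; a batch of accumulated measurements goes in as one statement and one commit, amortizing the per-commit overhead:

====================================================================
INSERT INTO measurement_type(measurement_type_name)
VALUES ('measurement 1'),
       ('measurement 2'),
       -- ... values accumulated in the application ...
       ('measurement 10');
====================================================================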