On Sun, Nov 23, 2008 at 12:32 AM, Alvaro Herrera <alvherre@xxxxxxxxxxxxxxxxx> wrote:
>
>> On 21 Nov, 13:50, ciprian.crac...@xxxxxxxxx ("Ciprian Dorin Craciun") wrote:
>>
>> >     What have I observed / tried:
>> >     * I've tested without the primary key and the index, and the
>> > results were the best for inserts (600k inserts / s), but the
>> > readings worked extremely slowly (due to the lack of indexing);
>> >     * with only the index (or only the primary key) the insert rate is
>> > good at the start (for the first 2 million readings), but then drops to
>> > about 200 inserts / s;
>
> I didn't read the thread so I don't know if this was suggested already:
> bulk index creation is a lot faster than retail index inserts. Maybe
> one thing you could try is to have an unindexed table to do the inserts,
> and a separate table that you periodically truncate, refill with the
> contents of the other table, and then index. Two main problems:
> 1. querying during the truncate/refill/reindex process (you can solve it
> by having a second table that you "rename in place"); 2. the query table
> is almost always out of date.
>
> --
> Alvaro Herrera                http://www.CommandPrompt.com/
> The PostgreSQL Company - Command Prompt, Inc.
>
> --
> Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general

    The concerns you have listed are very important to me... I will use
the database not only for archival and offline analysis, but also for
realtime queries (like: what was the power consumption in the last
minute?)... Of course I could use Postgres only for archival, like
you've said, and some other solution for the realtime queries, but this
adds complexity to the application...

    Thanks,
    Ciprian Craciun.
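    (For the archives, a minimal sketch of the staging-table approach
Alvaro describes, as I understand it. The table and column names here —
"readings", "readings_query", and a "ts" timestamp column — are made up
for illustration; the real schema would differ.)

```sql
-- Assumption: "readings" is the unindexed table that receives all
-- high-rate inserts; "readings_query" is the indexed copy used for
-- queries. Rebuild the copy on the side, then swap it in: DDL is
-- transactional in PostgreSQL, so readers see the switch atomically.
BEGIN;
CREATE TABLE readings_query_new (LIKE readings);
INSERT INTO readings_query_new SELECT * FROM readings;
-- Drop the stale copy first (this also drops its index, freeing the
-- index name), then build the index in bulk on the fresh data.
DROP TABLE IF EXISTS readings_query;
CREATE INDEX readings_query_ts_idx ON readings_query_new (ts);
-- "Rename in place": the new table takes over the query-table name.
ALTER TABLE readings_query_new RENAME TO readings_query;
COMMIT;
```

    This still has the second problem Alvaro mentions: between rebuilds,
"readings_query" lags behind "readings" by up to one refill interval.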