sergio mayoral <smayoral@xxxxxxxxx> writes:
> I am using the libpq library and PostgreSQL 8.4 for my Linux application running
> on ARM with 256 MB. I am just doing:
> PQconnectdb();
> PQexec(INSERT INTO table1 ....);  (0.009661 sec.)
> PQexec(INSERT INTO table1 ....);  (0.004208 sec.)
> PQexec(INSERT INTO table2 ....);  (0.007352 sec.)
> PQexec(INSERT INTO table2 ....);  (0.002533 sec.)
> PQexec(INSERT INTO table2 ....);  (0.002281 sec.)
> PQexec(INSERT INTO table2 ....);  (0.002244 sec.)
> PQexec(INSERT INTO table3 ....);  (0.006903 sec.)
> PQexec(INSERT INTO table3 ....);  (0.002903 sec.)
> PQfinish();
> I time each PQexec with gettimeofday() and I always see
> that the first INSERT into each table takes longer than the next ones.

The first few commands of *any* type on a new connection are going to
take longer than repeat versions of those commands, because the backend
needs to load up its internal caches.  Once it's cached information
about the tables, operators, etc. that you are working with, it's a bit
faster.  This isn't specific to INSERT.

> This must be something in the parser stage, and since I am issuing the
> same queries every time, I would like to know if there is a way to cache
> these queries in order to speed up the first INSERTs.

The only "fix" is to hang onto your connections longer.  Consider a
connection pooler.

			regards, tom lane

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
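[Editor's note] As a concrete illustration of "hang onto your connections longer," here is a minimal sketch of the pattern: open one connection, prepare the INSERT once with PQprepare(), and reuse it with PQexecPrepared() so both the backend's catalog caches and the parsed statement stay warm across calls. The connection string, table name, and column are hypothetical placeholders, and the program requires a running PostgreSQL server to do anything useful:

```c
/* Sketch: reuse one connection and one prepared statement, instead of
 * reconnecting (or re-parsing) for every batch of INSERTs.
 * "dbname=mydb", "table1", and column "val" are hypothetical --
 * adjust to your own schema.  Compile with: cc file.c -lpq
 */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=mydb");   /* hypothetical conninfo */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Parse and plan the statement once per connection ... */
    PGresult *res = PQprepare(conn, "ins1",
                              "INSERT INTO table1 (val) VALUES ($1)",
                              1, NULL);
    if (PQresultStatus(res) != PGRES_COMMAND_OK) {
        fprintf(stderr, "prepare failed: %s", PQerrorMessage(conn));
        PQclear(res);
        PQfinish(conn);
        return 1;
    }
    PQclear(res);

    /* ... then execute it repeatedly over the same warm connection. */
    for (int i = 0; i < 100; i++) {
        char buf[32];
        const char *params[1];
        snprintf(buf, sizeof(buf), "%d", i);
        params[0] = buf;
        res = PQexecPrepared(conn, "ins1", 1, params, NULL, NULL, 0);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "insert failed: %s", PQerrorMessage(conn));
        PQclear(res);
    }

    PQfinish(conn);   /* only when the application shuts down,
                       * not after every batch */
    return 0;
}
```

Note that prepared statements are per-connection, so this only pays off if the connection itself is long-lived, which is exactly why a pooler (e.g. PgBouncer) helps when the application cannot keep connections open itself.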