Hi, our application uses Postgres in a rather unusual way. A GUI application stores several hundred thousand 'parameters' in it -- basically it is used like a big INI file. There are about 50 tables holding the various parameters.

A typical load goes like this:

    select id, child_tb_key_id, <fields with params> from tb1

then, for each row returned, a similar select runs against the child table, and so on, many levels deep. I know this is not the proper way to use SQL -- we should be selecting many rows at once, joining them, etc. -- but it is what it is for now. (Sketch 3 at the bottom shows the kind of single-query rewrite I have in mind.)

The queries themselves are very fast: Postgres reports that all the queries for a typical 'load' operation take 0.8 seconds combined. The time the GUI user perceives, however, is 8 seconds. A big chunk of that is spent sending the SQL statements and receiving the results back -- network traffic, parsing, etc. In total, about 2400 queries (all selects) are issued in that period.

I am trying to figure out how to tune the PG configuration for such a contrived deployment. For example, we do not mind PG running on the same machine as the client app; it is connected via the Qt SQL PG plugin, so it uses the native PG access library underneath. Are there any optimizations for that? (Sketch 1 below shows what I have in mind: connecting over a Unix-domain socket.)

Also, this is a single-client / single-connection system. What optimizations can be done for that?

And finally, since most of the queries are very quick index-based selects, what can be done to minimize the traffic between PG and the client? (Sketch 2 below shows the prepared-statement reuse I am considering.)

Thank you in advance for any recommendations/pointers.

--
Vlad P
author of C++ ORM
http://github.com/vladp/CppOrm/tree/master
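
Sketch 1: same-machine connection over a Unix-domain socket. A minimal sketch of what I mean, assuming the stock QPSQL driver; the socket directory (/var/run/postgresql is the Debian default, other distros differ) and the database name are placeholders. As I understand it, the driver hands hostName through to libpq, and libpq treats a host beginning with '/' as a socket directory, so every one of those 2400 round trips would skip the TCP stack.

#include <QCoreApplication>
#include <QDebug>
#include <QSqlDatabase>
#include <QSqlError>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QSqlDatabase db = QSqlDatabase::addDatabase("QPSQL");
    // A host name beginning with '/' is passed through to libpq,
    // which interprets it as the Unix-domain socket directory
    // rather than a TCP host.
    db.setHostName("/var/run/postgresql");   // placeholder: adjust per distro
    db.setDatabaseName("params_db");         // placeholder database name
    // With local 'peer' or 'trust' auth in pg_hba.conf, no
    // user/password calls should be needed here.

    if (!db.open()) {
        qWarning() << "connect failed:" << db.lastError().text();
        return 1;
    }
    qDebug() << "connected over the local socket";
    return 0;
}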
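
Sketch 2: reusing one prepared statement per statement shape. A minimal sketch, assuming a hypothetical child table tb_child(id, parent_id); the point is that prepare() is called once and only the bind value changes per execution, so the per-statement parse work is not repeated for each of the 2400 selects.

#include <QList>
#include <QSqlDatabase>
#include <QSqlQuery>
#include <QVariant>

// Collect the ids of all children of the given parents, re-executing
// one prepared select instead of preparing a fresh statement per row.
// tb_child(id, parent_id) is a placeholder for our real child tables.
QList<int> loadChildIds(QSqlDatabase db, const QList<int> &parentIds)
{
    QList<int> out;

    QSqlQuery q(db);
    q.setForwardOnly(true);     // hint to skip result caching where the driver supports it
    q.prepare("select id from tb_child where parent_id = ?");

    foreach (int pid, parentIds) {
        q.bindValue(0, pid);    // only the parameter changes per exec
        if (!q.exec())
            continue;           // real code would inspect q.lastError()
        while (q.next())
            out.append(q.value(0).toInt());
    }
    return out;
}

Even with this, it is still one round trip per parent row; sketch 3 is the version that attacks the round-trip count itself.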
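
Sketch 3: collapsing the per-row descent into one round trip. A minimal sketch, assuming PG 8.4+ (for WITH RECURSIVE) and, purely for illustration, a single self-referencing table tb_params(id, parent_id, name, value). Our real layout has ~50 distinct tables, so a real query would need a union branch (or staged joins) per parent/child table pair, but the shape of the fix is the same: one select returns a whole subtree instead of thousands of selects returning one level each.

#include <QDebug>
#include <QSqlDatabase>
#include <QSqlQuery>
#include <QVariant>

// Fetch an entire parameter subtree in a single query/round trip.
// tb_params(id, parent_id, name, value) is a simplified placeholder
// for the real multi-table hierarchy.
void loadSubtree(QSqlDatabase db, int rootId)
{
    QSqlQuery q(db);
    q.prepare(
        "with recursive tree as ("
        "  select id, parent_id, name, value"
        "    from tb_params where id = ? "
        "  union all "
        "  select p.id, p.parent_id, p.name, p.value"
        "    from tb_params p join tree t on p.parent_id = t.id"
        ") "
        "select id, parent_id, name, value from tree");
    q.bindValue(0, rootId);

    if (!q.exec())
        return;                 // real code would inspect q.lastError()
    while (q.next())            // columns: 0=id, 1=parent_id, 2=name, 3=value
        qDebug() << q.value(2).toString() << "=" << q.value(3).toString();
}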