2011/1/6 Вячеслав Блинников <slavmfm@xxxxxxxxx>:
> 1: I didn't figure out what that means - can you explain it better?

http://ru.wikipedia.org/wiki/%D0%A1%D0%B5%D0%BC%D0%B0%D1%84%D0%BE%D1%80_(%D0%B8%D0%BD%D1%84%D0%BE%D1%80%D0%BC%D0%B0%D1%82%D0%B8%D0%BA%D0%B0)

> 2: The operating system will refuse to let me create a thousand threads
> and, anyway, on average the database will only return responses once all
> of them have completed.

I don't know how I can help you, since you haven't explained the
architecture of your application very well.

> 3: I never close a connection once it has been created, so a pool will
> not help me (judging by what Google says about "connection pool").

Maybe you should.

> The problem can be seen from this abstract point of view:
> Transferring data from the application server (which connects to the
> database) takes 200 ms (and the same again to transfer it back); adding
> data to the database and then selecting data (all in one request) takes
> 250 ms, so each database operation (of that type) takes 200 + 250 + 200 =
> 650 ms. Two such operations take 650 + 650 = 1300 ms, but if there were a
> way to send two queries at once and then get two results at once (while,
> of course, tracking the correspondence between requests and responses),
> we could reduce those two "dialogues" from 1300 ms to 200 + 250 + 250 +
> 200 = 900 ms. So we win 400 ms - with a thousand requests every few
> minutes, that becomes a very good saving.

Databases are optimized for throughput, not latency. It isn't in
question that there would be less latency if we could parallelise the
queries. What is in question is:

1. Whether or not it matters.

2. Whether or not that's possible, given the restrictions you insist on.

-- 
Regards,
Peter Geoghegan
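
[Editor's note: a minimal sketch of the round-trip saving discussed above,
assuming libpq and a hypothetical "items" table (neither comes from the
thread itself). With the simple query protocol, PQsendQuery accepts several
semicolon-separated statements, so both travel to the server in one network
round trip, and PQgetResult then hands back one result per statement.]

/* Sketch: two statements, one network round trip, using libpq.
 * Connection string and table name are placeholders. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");
    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* Both statements are sent in a single simple-protocol message. */
    if (!PQsendQuery(conn,
                     "INSERT INTO items (name) VALUES ('foo');"
                     "SELECT count(*) FROM items;"))
    {
        fprintf(stderr, "send failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* PQgetResult returns one PGresult per statement, then NULL. */
    PGresult *res;
    while ((res = PQgetResult(conn)) != NULL)
    {
        if (PQresultStatus(res) == PGRES_TUPLES_OK)
            printf("count: %s\n", PQgetvalue(res, 0, 0));
        PQclear(res);
    }

    PQfinish(conn);
    return 0;
}

[Note that statements batched this way run as a single implicit transaction,
and this only addresses the latency arithmetic - not Peter's point 1, whether
the saving actually matters for the workload in question.]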