Hello, Steve
2014-09-10 21:08 GMT+04:00 Steve Atkins <steve@xxxxxxxxxxx>:
> On Sep 10, 2014, at 12:16 AM, Dmitriy Igrishin <dmitigr@xxxxxxxxx> wrote:
>> Hello, David
>>
>> 2014-09-10 4:31 GMT+04:00 David Boreham <david_list@xxxxxxxxxxx>:
>>> Hi Dmitriy, are you able to say a little about what's driving your quest
>>> for async http-to-pg? I'm curious as to the motivations, and whether they
>>> match up with some of my own reasons for wanting to use low-thread-count
>>> solutions.
>>
>> For many web projects I consider Postgres a development platform in itself.
>> Thus, I prefer to keep the business logic (data-integrity trigger functions
>> and API functions) in the database. Because of the nature of the Web, many
>> concurrent clients can request a site, and I want to serve as many of them
>> as possible with minimal overhead. I also want to avoid complex solutions.
>> So I believe that with an asynchronous solution it's possible to *stream*
>> data from the database to the maximum number of clients (some of whom may
>> request my site over slow connections).
>
> That's going to require you to have one database connection open for each
> client. If the client is over a slow connection it'll keep the database
> connection open far longer than is needed (compared to the usual "pull data
> from the database as fast as the disks will go, then spoonfeed it out to
> the slow client" approach). Requiring a live database backend for every
> open client connection doesn't seem like a good idea if you're supporting
> many slow concurrent clients.

Good point. Thus, some caching on the HTTP server side should be implemented
then.
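
For concreteness, here is a minimal sketch of the row-at-a-time streaming
idea using libpq's single-row mode. The hypothetical send_chunk_to_client()
stands in for the HTTP side; a real server would drive the loop from
readiness events on PQsocket() with PQconsumeInput()/PQisBusy() instead of
letting PQgetResult() block as it does here:

/* Build (paths may vary): cc stream.c -o stream -lpq */
#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

/* Hypothetical stand-in for handing one row to a (possibly slow) client. */
static void send_chunk_to_client(const char *data)
{
    printf("%s\n", data);
}

int main(void)
{
    /* Connection parameters are taken from the environment (PGHOST etc.). */
    PGconn *conn = PQconnectdb("");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return EXIT_FAILURE;
    }

    /* Dispatch the query without waiting for the result... */
    if (!PQsendQuery(conn, "SELECT generate_series(1, 5)")) {
        fprintf(stderr, "send failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return EXIT_FAILURE;
    }

    /* ...and ask libpq to hand rows back one at a time instead of
     * buffering the whole result set in memory. Must be called right
     * after PQsendQuery, before the first PQgetResult. */
    PQsetSingleRowMode(conn);

    PGresult *res;
    while ((res = PQgetResult(conn)) != NULL) {
        if (PQresultStatus(res) == PGRES_SINGLE_TUPLE)
            send_chunk_to_client(PQgetvalue(res, 0, 0));
        else if (PQresultStatus(res) != PGRES_TUPLES_OK)
            fprintf(stderr, "query failed: %s", PQresultErrorMessage(res));
        PQclear(res);
    }

    PQfinish(conn);
    return EXIT_SUCCESS;
}

Note that the backend stays busy for the whole transfer, which is exactly
the concern raised above; hence the caching (or buffering) on the HTTP side.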
--
// Dmitriy.