On Thu, Jun 12, 2014 at 9:58 AM, Tom Lane <tgl@xxxxxxxxxxxxx> wrote:
> Matt Silverlock <matt@xxxxxxxxxxxxxxxxxx> writes:
>> Hi all. This might be tricky in so much as there's a few moving parts
>> (when isn't there?), but I've tried to test the postgres side as much
>> as possible.
>> Trying to work out a potential database bottleneck with an HTTP
>> application (written in Go):
>> Pages that render HTML templates but don't perform DB queries can hit
>> ~36k+ req/s.
>> Pages that perform a SELECT on a single row net about ~6.6k req/s:
>>   db.Get(l, "SELECT * FROM listings WHERE id = $1 AND expiry_date > current_date", l.Id)
>> Pages that SELECT multiple rows with OFFSET and LIMIT conditions
>> struggle to top ~1.3k req/s.
>
> You don't show us exactly what you're doing with OFFSET/LIMIT, but I'm
> going to guess that you're using it to paginate large query results.
> That's basically always going to suck: Postgres has no way to implement
> OFFSET except to generate and then throw away that number of initial rows.
> If you do the same query over again N times with different OFFSETs, it's
> going to cost you N times as much as the base query would.
>
> If the application's interaction with the database is stateless then you
> may not have much choice, but if you do have a choice I'd suggest doing
> pagination by means of fetching from a cursor rather than independent
> queries.

Well, you can also do client-side pagination using the row-wise
comparison feature, implemented by you :-).  Cursors can be the best
approach, but it's nice to know the client-side approach if you're
really stateless and/or want to be able to pick up external changes
during the browse.

SELECT * FROM listings
WHERE (id, expiry_date) > (last_id_read, last_expiry_date_read)
ORDER BY id, expiry_date
LIMIT x;

Then you just save off the highest (id, expiry_date) pair read and feed
it back into the query.  This technique is useful for emulating ISAM
browse operations.

merlin
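(For anyone wanting to try the keyset approach above outside Postgres: here's a
minimal runnable sketch using Python's bundled sqlite3 as a stand-in.  The
listings schema, the sample rows, and the page size are all made up for
illustration; SQLite supports the same row-value comparison syntax since
version 3.15.)

```python
import sqlite3

# Toy stand-in for the "listings" table from the thread (hypothetical data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE listings (id INTEGER, expiry_date TEXT)")
conn.executemany(
    "INSERT INTO listings VALUES (?, ?)",
    [(i, f"2014-06-{10 + i:02d}") for i in range(1, 8)],
)

PAGE_SIZE = 3

def fetch_page(last_id, last_date):
    # Keyset ("row-wise comparison") pagination: resume strictly after the
    # last (id, expiry_date) pair already seen, instead of using OFFSET,
    # so each page costs the same regardless of how deep you've browsed.
    return conn.execute(
        "SELECT id, expiry_date FROM listings"
        " WHERE (id, expiry_date) > (?, ?)"
        " ORDER BY id, expiry_date LIMIT ?",
        (last_id, last_date, PAGE_SIZE),
    ).fetchall()

page1 = fetch_page(0, "")        # first page: ids 1, 2, 3
page2 = fetch_page(*page1[-1])   # feed the highest (id, date) pair back in
```

The client only has to remember the last (id, expiry_date) pair it rendered,
which keeps the server stateless, unlike a cursor, and lets concurrent inserts
or deletes show up mid-browse.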