Matt Silverlock <matt@xxxxxxxxxxxxxxxxxx> writes:
> Hi all. This might be tricky in so much as there's a few moving parts
> (when isn't there?), but I've tried to test the postgres side as much
> as possible.
> Trying to work out a potential database bottleneck with a HTTP
> application (written in Go):
> Pages that render HTML templates but don't perform DB queries can hit
> ~36k+ req/s
> Pages that perform a SELECT on a single row net about ~6.6k req/s:
>     db.Get(l, "SELECT * FROM listings WHERE id = $1 AND expiry_date > current_date", l.Id)
> Pages that SELECT multiple rows with OFFSET and LIMIT conditions
> struggle to top 1.3k req/s

You don't show us exactly what you're doing with OFFSET/LIMIT, but I'm
going to guess that you're using it to paginate large query results.
That's basically always going to suck: Postgres has no way to implement
OFFSET except to generate and then throw away that number of initial
rows.  If you do the same query over again N times with different
OFFSETs, it's going to cost you N times as much as the base query would.

If the application's interaction with the database is stateless then you
may not have much choice, but if you do have a choice I'd suggest doing
pagination by means of fetching from a cursor rather than independent
queries.

			regards, tom lane
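
P.S. For concreteness, a minimal sketch of the cursor approach from Go
using database/sql.  The lib/pq driver, the DSN, the 50-row page size,
the listings_cur name, and the selected column are placeholder
assumptions, not details from the original report; the point is only
that the expensive query runs once and each page is a cheap FETCH.

    package main

    import (
        "database/sql"
        "fmt"
        "log"

        _ "github.com/lib/pq" // placeholder; any database/sql Postgres driver works
    )

    func paginate(db *sql.DB) error {
        // A cursor only exists inside its transaction, so every FETCH
        // must go through the same tx.
        tx, err := db.Begin()
        if err != nil {
            return err
        }
        defer tx.Rollback()

        // Run the expensive query once; the cursor remembers its position.
        _, err = tx.Exec(`DECLARE listings_cur CURSOR FOR
            SELECT id FROM listings
            WHERE expiry_date > current_date
            ORDER BY id`)
        if err != nil {
            return err
        }

        for {
            // Each "page" is just another FETCH; unlike OFFSET, no rows
            // are generated and then thrown away.
            rows, err := tx.Query(`FETCH 50 FROM listings_cur`)
            if err != nil {
                return err
            }
            n := 0
            for rows.Next() {
                var id int64
                if err := rows.Scan(&id); err != nil {
                    rows.Close()
                    return err
                }
                fmt.Println(id)
                n++
            }
            if err := rows.Err(); err != nil {
                rows.Close()
                return err
            }
            rows.Close()
            if n == 0 {
                break // cursor exhausted
            }
        }

        _, _ = tx.Exec(`CLOSE listings_cur`)
        return tx.Commit()
    }

    func main() {
        db, err := sql.Open("postgres", "dbname=app sslmode=disable") // placeholder DSN
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()
        if err := paginate(db); err != nil {
            log.Fatal(err)
        }
    }

The catch, as noted above, is that the cursor ties the client to one
open transaction, so this only fits if the application's interaction
with the database isn't stateless.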