Hi all. This might be tricky insofar as there are a few moving parts (when aren't there?), but I've tried to test the Postgres side on its own as much as possible. I'm trying to track down a potential database bottleneck in an HTTP application (written in Go):
Other pertinent details:
The application has a connection pool via the lib/pq driver (https://github.com/lib/pq) with MaxOpen set to 256 connections. Stack size is 8GB and the maximum number of socket connections is set to 1024 (running out of FDs isn't the problem here, from what I can see). Relevant postgresql.conf settings are below; everything else should be default, including fsync and synchronous_commit (both on), for obvious reasons:
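Going back to the Go side for a second: the pool setup is roughly the following. This is a simplified sketch rather than the actual application code, so the DSN, idle count, and connection lifetime are placeholders; the only setting that matters for this question is the MaxOpen cap of 256.

    package main

    import (
        "database/sql"
        "log"
        "time"

        _ "github.com/lib/pq" // registers the "postgres" driver
    )

    func openPool() *sql.DB {
        // Placeholder DSN; the real application reads this from its config.
        db, err := sql.Open("postgres", "postgres://app:secret@localhost:5432/app?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        // Cap the pool at 256 open connections, as described above.
        db.SetMaxOpenConns(256)
        // Idle and lifetime values here are assumptions for the sketch.
        db.SetMaxIdleConns(64)
        db.SetConnMaxLifetime(5 * time.Minute)
        return db
    }

    func main() {
        db := openPool()
        defer db.Close()
        if err := db.Ping(); err != nil {
            log.Fatal(err)
        }
        log.Printf("pool ready: %+v", db.Stats())
    }

Note that sql.Open doesn't actually establish a connection by itself, so the Ping is only there to surface connection errors up front.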
The query in question (with its plan) is here: http://explain.depesz.com/s/7g8, and the table schema is below:
The single-row query's plan is here: http://explain.depesz.com/s/1Np (this is where I see 6.6k req/s at the application level). Some pgbench results from this machine as well:
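For completeness, the request path looks roughly like the sketch below. It is not the real handler: the table, column, and query are placeholders (the real query and schema aren't reproduced in this sketch), and the per-query timing log is just there to show one way to separate the database round-trip from the rest of the request.

    package main

    import (
        "database/sql"
        "log"
        "net/http"
        "time"

        _ "github.com/lib/pq"
    )

    func main() {
        // Pool setup as in the earlier sketch (placeholder DSN).
        db, err := sql.Open("postgres", "postgres://app:secret@localhost:5432/app?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        db.SetMaxOpenConns(256)

        http.HandleFunc("/item", func(w http.ResponseWriter, r *http.Request) {
            // Placeholder single-row lookup; the real query is the one in
            // the explain.depesz.com links.
            start := time.Now()
            var name string
            err := db.QueryRow("SELECT name FROM items WHERE id = $1", r.URL.Query().Get("id")).Scan(&name)
            queryTime := time.Since(start)
            if err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
                return
            }
            // Comparing this per-query time against total request latency
            // shows whether the gap to pgbench is in the database
            // round-trip or elsewhere in the handler.
            log.Printf("query took %s", queryTime)
            w.Write([]byte(name))
        })

        log.Fatal(http.ListenAndServe(":8080", nil))
    }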
Ultimately I'm not expecting a miracle (database ops are nearly always the slowest part of a web server outside the latency to the client itself), but I'd expect something a little closer to the pgbench numbers; even 10% of 33k would be a lot better. And of course, this is somewhat "academic" because I don't expect to see four million hits an hour, but I'd also like to catch problems for future reference. Thanks in advance.