Hannes Erven <hannes@xxxxxxxx> writes:
> I'm looking at a "SELECT * FROM pg_stat_statements" output and am
> puzzled by the huge differences between min/max_exec_time even for
> simple queries.

> The most extreme example is probably the statement used by the
> application's connection health check:
> SELECT 1
> min=0.001, mean=0.00386, max=36.812

Am I right in guessing that that's the first/only command issued in
its connection?  If so, there would be catalog cache population
overhead involved here, which might explain some of the variance.
(If some but not all instances are first commands, that'd definitely
help explain the variance.  PG backends are fairly heavyweight
objects, and take a bit to come up to speed.)

> Other statements with huge variances include:
> SET application_name=$1
> min=0.002, mean=0.005, max=9.177

Could be the same issue.

> SELECT * FROM table WHERE id=$1 (where ID is the primary key column;
> table has 0.5M rows and is frequently vacuum analyzed)
> min=0.010, mean=0.260, max=12338.665

This, on the other hand, seems odd.  Maybe the query sometimes gets
blocked on a lock?  Enabling log_lock_waits might help you check.

> According to the system's monitoring, there is no pressure on any
> resource (cpu/mem/io). It's 13.5-2pgdg20.04+1 on Ubuntu 20.04; the VM
> has 12 cpus/16GB memory, ceph-based SSD storage (latency ~1.5ms), and
> runs on max_connections=100 with usually 25-40 processes being
> connected.

Hmm ... if you're sitting on a VM rather than directly on the iron,
then there's a whole bunch of other potential reasons for irregular
performance, most of which you probably can't see from inside the VM.

			regards, tom lane
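
[Editorial addendum: a minimal sketch of the log_lock_waits suggestion above, plus a query for spotting high-variance statements. The GUC names, pg_reload_conf(), and the pg_stat_statements column names are standard PostgreSQL (the *_exec_time columns match PG 13, which the poster runs); the calls threshold and LIMIT are arbitrary illustrative choices.]

```sql
-- Report any lock wait that lasts longer than deadlock_timeout
-- (default 1s) in the server log.
ALTER SYSTEM SET log_lock_waits = on;
SELECT pg_reload_conf();

-- Find statements whose worst case is far out of line with the mean.
-- The calls filter and ratio floor are illustrative, not canonical.
SELECT query, calls,
       min_exec_time, mean_exec_time, max_exec_time, stddev_exec_time
FROM pg_stat_statements
WHERE calls > 100
ORDER BY max_exec_time / GREATEST(mean_exec_time, 0.001) DESC
LIMIT 20;
```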