Hi community,
I'm looking at the output of "SELECT * FROM pg_stat_statements" and am
puzzled by the huge differences between min_exec_time and max_exec_time
(times in milliseconds), even for simple queries.
The most extreme example is probably the statement used by the
application's connection health check:
SELECT 1
min=0.001, mean=0.00386, max=36.812
Other statements with huge variances include:
SET application_name=$1
min=0.002, mean=0.005, max=9.177
SELECT * FROM table WHERE id=$1 (id is the primary key column; the
table has 0.5M rows and is vacuumed and analyzed frequently)
min=0.010, mean=0.260, max=12338.665
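For reference, the numbers above were pulled with a query along these
lines (column names as in pg_stat_statements on PostgreSQL 13; the
rounding and the ordering by max/mean ratio are just for readability):

    SELECT query,
           calls,
           round(min_exec_time::numeric, 3)    AS min_ms,
           round(mean_exec_time::numeric, 3)   AS mean_ms,
           round(max_exec_time::numeric, 3)    AS max_ms,
           round(stddev_exec_time::numeric, 3) AS stddev_ms
    FROM pg_stat_statements
    ORDER BY max_exec_time / GREATEST(mean_exec_time, 0.001) DESC
    LIMIT 10;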
According to the system's monitoring, there is no pressure on any
resource (CPU/memory/I/O). It's PostgreSQL 13.5 (13.5-2.pgdg20.04+1) on
Ubuntu 20.04; the VM has 12 CPUs and 16 GB of memory, Ceph-based SSD
storage (latency ~1.5 ms), and runs with max_connections=100, with
usually 25-40 backends connected.
Is this to be expected?
Is there something I can watch out for, or monitor?
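For example, I wondered whether sampling wait events during one of the
spikes would show anything; just a sketch of what I had in mind:

    SELECT wait_event_type, wait_event, count(*)
    FROM pg_stat_activity
    WHERE state = 'active'
    GROUP BY wait_event_type, wait_event
    ORDER BY count(*) DESC;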
Thank you for any insights...
Best regards
-hannes