On Thu, Aug 8, 2024 at 5:18 AM Costa Alexoglou <costa@xxxxxxxxxx> wrote:
Hey folks,
I noticed something weird, and not sure if this is the expected behaviour or not in PostgreSQL.
So I am running Benchbase (a benchmark framework) with 50 terminals (50 concurrent connections). There are 2-3 additional connections, e.g. one for a postgres-exporter container.
So far so good: with `max_connections` at 100 there is no problem. What happens is that if I manually execute `VACUUM FULL`,
Off-topic, but... WHY?? It almost certainly does not do what you think it does. Especially if it's just "VACUUM FULL;"
the connections are exhausted.
Connect to the relevant database and run this query. Don't disconnect, and keep running it over and over while the "VACUUM FULL;" is executing. That'll tell you exactly what happens.
select pid
,datname as db
,application_name as app_name
,case
when client_hostname is not null then client_hostname
else client_addr::text
end AS client_name
,usename
,to_char((EXTRACT(epoch FROM now() - backend_start))/60.0, '99,999.00') as backend_min
,to_char(query_start, 'YYYY-MM-DD HH24:MI:SS.MS') as "Query Start"
,to_char((EXTRACT(epoch FROM now() - query_start))/60.0, '99,999.00') as qry_min
,to_char(xact_start, 'YYYY-MM-DD HH24:MI:SS.MS') as "Txn Start"
,to_char((EXTRACT(epoch FROM now() - xact_start)/60.0), '999.00') as txn_min
,state
,query
from pg_stat_activity
WHERE pid != pg_backend_pid()
order by 6 desc;
I also tried this with 150 `max_connections` to see if it just "doubles" the current connections, but as it turned out, connections were still exhausted until `max_connections` was reached.
Double it again?
This was cross-checked: the postgres-exporter could not connect, and I was not able to connect manually with `psql` either.
Is this expected or is this a bug?
Depends on what you set these to:
autovacuum_max_workers
max_parallel_maintenance_workers
max_parallel_workers
max_parallel_workers_per_gather
max_worker_processes
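A quick way to check what those are currently set to is to query the `pg_settings` view (a sketch; these are all standard PostgreSQL GUC names):

```sql
-- Inspect the worker-related settings that bound how many extra
-- backend processes vacuum and parallel maintenance can spawn.
SELECT name, setting
FROM pg_settings
WHERE name IN ('autovacuum_max_workers',
               'max_parallel_maintenance_workers',
               'max_parallel_workers',
               'max_parallel_workers_per_gather',
               'max_worker_processes')
ORDER BY name;
```

Comparing these values against the `pg_stat_activity` output above should show whether the "extra" connections are worker backends rather than client sessions.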