Hey folks,
I noticed something weird, and I'm not sure whether this is expected behaviour in PostgreSQL or not.
So I am running Benchbase (a benchmark framework) with 50 terminals (50 concurrent connections).
There are also 2-3 additional connections, for example one for a postgres-exporter container.
So far so good: with `max_connections` at 100 there is no problem. But if I manually execute `VACUUM FULL` while the benchmark is running, the connections get exhausted.
I also tried this with `max_connections` at 150, to see if it just “doubles” the current connections, but as it turned out, the connections were still exhausted until they reached `max_connections`.
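(For reference, raising the limit looks something like this; `max_connections` only takes effect after a restart, and the data directory path below is just a placeholder:)

```
-- Raise the connection limit; ALTER SYSTEM writes it to postgresql.auto.conf
ALTER SYSTEM SET max_connections = 150;

-- max_connections is a postmaster-level setting, so a server restart is needed,
-- e.g. from the shell: pg_ctl restart -D /path/to/data
```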
I cross-checked this: the postgres-exporter could not connect, and I was not able to connect manually with `psql` either.
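For anyone trying to reproduce this, something like the following should show the backends piling up (a sketch, assuming PostgreSQL 10+ for the `backend_type` column; run it from a session opened before the limit is hit):

```
-- Client backends vs. the configured limit
SELECT count(*) AS client_backends,
       current_setting('max_connections') AS max_connections
FROM pg_stat_activity
WHERE backend_type = 'client backend';

-- Break the backends down by state and wait event to see where they are stuck
SELECT state, wait_event_type, wait_event, count(*)
FROM pg_stat_activity
GROUP BY state, wait_event_type, wait_event
ORDER BY count(*) DESC;
```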
Is this expected or is this a bug?
postgres-exporter logs:
```
sql: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL: sorry, too many clients already
```