The default settings might be too low; try setting work_mem higher. If an operation such as a sort does not fit into work_mem, PostgreSQL spills it to a temporary file on disk.
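A minimal sketch of how to check and raise it (the 64MB value is only an example, not a tuned recommendation, and "yourdb" is a placeholder for your database name):

  -- show the current setting
  SHOW work_mem;

  -- raise it for the current session only, to test a single query
  SET work_mem = '64MB';

  -- make it the default for new sessions on one database
  ALTER DATABASE yourdb SET work_mem = '64MB';

If I remember correctly, on Cloud SQL the instance-wide default is changed through the work_mem database flag rather than by editing postgresql.conf.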
Hello all,
We have a small Google Cloud SQL PostgreSQL 11 instance.
The instance is configured with 8 GB of memory and 4 vCPUs; work_mem is 4 MB.
I was wondering why the instance uses disk temp files for such a small amount of data (see the size in the log below).
I suppose that either something other than the work_mem limit can trigger disk temp file creation, or the reported SQL statement is not the only one contributing to it:
I 2021-06-22T09:12:59.164913Z 2021-06-22 09:12:59.163 UTC [1957798]: [2-1] db=<db>,user=<user> STATEMENT: SELECT c1, c2 FROM schema1.t1 WHERE c1 >= $1 ORDER BY c1
I 2021-06-22T09:12:59.164379Z 2021-06-22 09:12:59.163 UTC [1957798]: [1-1] db=<db>,user=<user> LOG: temporary file: path "base/pgsql_tmp/pgsql_tmp1957798.0", size 7380992

The t1 table (table and column names have been masked for privacy) occupies only 440 kB and has 160 records.
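For reference, running the statement under EXPLAIN (ANALYZE, BUFFERS) should show whether this particular sort spills to disk (the literal 0 below only stands in for the bound parameter $1):

  EXPLAIN (ANALYZE, BUFFERS)
  SELECT c1, c2 FROM schema1.t1 WHERE c1 >= 0 ORDER BY c1;

An in-memory sort reports "Sort Method: quicksort" (or "top-N heapsort"), while a spilled one reports "Sort Method: external merge Disk: ...kB".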
Best regards
Dimitre
-- Holger Jakobs, Bergisch Gladbach, Tel. +49-178-9759012