Just to clarify: the size of the temp file is 7MB and with work_mem of 4MB it clearly doesn't fit.
But why would an ORDER BY over 440 kB of data require a 7 MB temp file?
One possible explanation is that the number of records in that table varies significantly during the day; but if that were the case, I would expect the table itself to be much bigger than 440 kB (size taken using \dt+).
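One possible way to check that theory (a sketch; it assumes the statistics collector is enabled and uses the masked table name t1):

```sql
-- If the table grew and shrank earlier in the day, there may still be
-- dead tuples, or a recent (auto)vacuum that reclaimed them.
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 't1';
```

A large n_dead_tup, or a last_autovacuum timestamp shortly before the \dt+ measurement, would suggest the table was bigger when the sort ran.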
Regards
Dimitre
Hello all,
We have a small Google Cloud SQL PostgreSQL 11 instance.
The instance is configured with 8G of memory and 4 vCPUs, work_mem is 4MB.
I was wondering why the instance uses disk temp files for such a small amount of data (see size in the log below)?
I suppose that either it's not only the work_mem limit that can trigger disk temp file creation, or the reported SQL statement is not the only one that contributes to it:
I 2021-06-22T09:12:59.164913Z 2021-06-22 09:12:59.163 UTC [1957798]: [2-1] db=<db>,user=<user> STATEMENT: SELECT c1, c2 FROM schema1.t1 WHERE c1 >= $1 ORDER BY c1
I 2021-06-22T09:12:59.164379Z 2021-06-22 09:12:59.163 UTC [1957798]: [1-1] db=<db>,user=<user> LOG: temporary file: path "base/pgsql_tmp/pgsql_tmp1957798.0", size 7380992

The t1 table (table and column names have been masked for privacy) occupies only 440 kB and has 160 records.
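For what it's worth, one way to see whether this statement itself spills to disk might be to re-run it under EXPLAIN (ANALYZE). The literal 0 below is only a placeholder for the masked $1 parameter:

```sql
-- Placeholder value 0 stands in for the masked $1 parameter.
-- In the plan output, "Sort Method: external merge  Disk: ...kB"
-- confirms a spill to a temp file; "quicksort  Memory: ...kB"
-- means the sort fit within work_mem.
EXPLAIN (ANALYZE, BUFFERS)
SELECT c1, c2 FROM schema1.t1 WHERE c1 >= 0 ORDER BY c1;
```

If the plan shows the sort fitting in memory, the 7 MB temp file presumably came from some other statement in the same session.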
Best regards
Dimitre