Re: Fwd: temp_file_limit?

Hi Tom and Thomas, thanks for your help.

@Tom:
If it really is per-process, then I would have expected it to die after 200GB was used?
As far as "valid bug" is concerned: I had hoped this would be per session, since that at least delivers a reasonable and usable limit; it is easy to control the number of sessions/statements in execution.
If it really is per process, then the limit is not very useful, just like work_mem: the execution plan of a query determines the number of processes (capped at the maximum, which is at least better than work_mem), and that can change whenever Postgres decides a new plan is in order. I can understand that solving this might be harder (although to me it looks like just a little bit of shared memory per session to keep a counter). To me it does not sound like an invalid bug, just one you do not want to solve now ;) And the real problem, for me, is that both work_mem and temp_file_limit should really apply to the entire instance/cluster ;) I know that that is even harder.
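To illustrate why a per-process limit is hard to reason about (a sketch with made-up numbers, not my actual settings): if temp_file_limit is enforced per process, a single parallel statement can spill a multiple of the configured limit.

```sql
-- Illustrative only: the values below are assumptions, not my real config.
SET temp_file_limit = '200GB';            -- enforced per *process*
SET max_parallel_workers_per_gather = 4;  -- planner may launch up to 4 workers
-- Worst case for one gather node: (1 leader + 4 workers) x 200GB = 1TB
-- of temp files for a single statement, despite the "200GB limit".
```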

For us it means we really cannot use Postgres parallelism: it is infinitely better to have a query that runs longer but finishes than to have the database die and recover randomly from OOM, or to have disk space fill up and kill random queries. Which is a bit of a pity, of course.

@Justin
The test is running on Ubuntu 22.04.1, x86_64, the disk is an NVMe 2TB WD850X with ext4 as a file system.
I will collect the other data around tmpfiles hopefully later today.
I have already set max_parallel_workers_per_gather to 1. I will probably disable all parallelism for the next runs to see whether that makes the size limit more workable.
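For those no-parallelism runs, the session-level settings would look something like this (a sketch; the 200GB value is just an example):

```sql
-- Disable parallel query for this session so each statement runs in a
-- single backend process, making temp_file_limit an effective per-query cap.
SET max_parallel_workers_per_gather = 0;
SET temp_file_limit = '200GB';  -- example value
```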
