Re: Fwd: temp_file_limit?

On Mon, Dec 19, 2022 at 16:29, Frits Jalvingh <jal@xxxxxx> wrote:
Ok, just to make sure that I understand correctly:
The parallel hash implementation needs to resize its table because of a mismatch in expected tuple count. I do expect this to be true: Postgres often grossly underestimates the expected row counts in our queries.
This is not fully implemented yet: removing the "old" files is not yet done, so every time the table resizes it creates a new set of files and the old ones remain.
I assume that the "used file size" only includes the "current" set of files, and that the old ones are not counted towards that amount? That would explain why it overallocates, of course.
That is not necessarily what is happening.
Could you try manually deleting (rm) these files, using the postgres user?
It's an ugly, dirty test, but it would show whether files really are being left behind without being deleted by Postgres.
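
If you want to check from inside the database first, pg_ls_tmpdir() lists the files currently sitting in the temp directory, so you can watch whether old file sets keep piling up across hash table resizes while the query runs. A minimal sketch, assuming PostgreSQL 12+ and superuser or pg_monitor privileges:

    -- List temp files of the default tablespace, largest first.
    SELECT name, pg_size_pretty(size) AS size, modification
    FROM pg_ls_tmpdir()
    ORDER BY size DESC;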

Alternatively, you could compile a version with
CHECK_WRITE_VS_EXTEND defined and collect as much information from the logs as possible,
as others here have suggested.
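
One thing that does not require recompiling: the stock GUC log_temp_files makes Postgres log every temporary file, with its size, at the moment it is deleted, which should show whether the space actually used on disk matches what temp_file_limit thinks is in use. For example:

    -- Log all temporary files regardless of size
    -- (the value is a kB threshold; 0 = log everything):
    SET log_temp_files = 0;
    -- or cluster-wide:
    ALTER SYSTEM SET log_temp_files = 0;
    SELECT pg_reload_conf();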


In any case, I now know what to do: I just need to disable all parallelism ( •̀ᴗ•́ )و ̑̑

I usually do that anyway, because parallelism makes queries die randomly. This is just one more reason.

I restarted that query with max_parallel_workers_per_gather=0, and this does not seem to use temp space at all. It was not exactly fast: it took 82 minutes with a single process running at 100% CPU. https://explain.depesz.com/s/HedE
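
For reference, that doesn't have to be a cluster-wide change; the setting can be scoped to a single session, or even to one transaction, for example:

    -- Disable parallel plans for this session only:
    SET max_parallel_workers_per_gather = 0;
    -- or only for the duration of one transaction:
    BEGIN;
    SET LOCAL max_parallel_workers_per_gather = 0;
    -- ... run the query ...
    COMMIT;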
Anyway, see the hint page (https://explain.depesz.com/s/HedE#hints); maybe it will be useful.

regards,
Ranier Vilela
