Re: Fwd: temp_file_limit?

On Mon, Dec 19, 2022 at 06:27:57PM +0100, Frits Jalvingh wrote:
> I have listed the files during that run:

> 213M -rw------- 1 postgres postgres 213M dec 19 17:46 i100of128.p0.0
> 207M -rw------- 1 postgres postgres 207M dec 19 17:46 i100of128.p1.0
> 210M -rw------- 1 postgres postgres 210M dec 19 17:49 i100of256.p0.0
> 211M -rw------- 1 postgres postgres 211M dec 19 17:49 i100of256.p1.0
> 188M -rw------- 1 postgres postgres 188M dec 19 17:53 i100of512.p0.0
[...]

I think that proves Thomas' theory: files from several partition
generations (...of128..., ...of256..., ...of512...) exist at the same
time, so the earlier generations' files clearly aren't being freed.  I'm
not sure how that helps you, though...

On Mon, Dec 19, 2022 at 01:51:33PM +1300, Thomas Munro wrote:
> One possibility is that you've hit a case that needs several rounds of
> repartitioning (because of a failure to estimate the number of tuples
> well), but we can't see that because you didn't show EXPLAIN (ANALYZE)
> output (understandably if it runs out of disk space before
> completing...).  The parallel hash code doesn't free up the previous
> generations' temporary files; it really only needs two generations'
> worth concurrently (the one it's reading from and the one it's writing
> to).  In rare cases where more generations are needed it could unlink
> the older ones -- that hasn't been implemented yet.  If you set
> log_temp_files = 0 to log temporary file names, it should be clear if
> it's going through multiple rounds of repartitioning, from the names
> (...of32..., ...of64..., ...of128..., ...of256..., ...).
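
For anyone wanting to try that, here's a minimal sketch of the session;
big_a and big_b are hypothetical tables standing in for the real join:

    -- log_temp_files is a threshold in kB; 0 logs every temporary file
    -- (it's a superuser/SET-privilege setting).  Each file's name and
    -- size is written to the server log when the file is removed.
    SET log_temp_files = 0;
    EXPLAIN (ANALYZE)
    SELECT count(*)
    FROM big_a a JOIN big_b b USING (id);  -- hypothetical large tables

If the join goes through several rounds of repartitioning, logged names
with increasing "...ofN..." parts (of128, of256, of512, ...) will show up
for the same hash join.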

-- 
Justin Pryzby
System Administrator
Telsasoft
+1-952-707-8581
