On Fri, 5 Jul 2024 at 12:50, James Pang <jamespang886@xxxxxxxxx> wrote:
> we have a daily vacuumdb and analyze job; generally speaking it's done in
> seconds, but sometimes it suddenly runs for more than tens of minutes with
> the same bind variable values and huge temp space gets used, and at that
> time EXPLAIN shows "Hash Anti Join, Hash Right Join" with seq scans of the
> two tables.

There was talk about adding costing for run-time partition pruning factors, but nothing was ever agreed, so nothing was done. It's just not that obvious to me how we'd do that. If the Append had 10 partitions as subnodes, with an equality join condition, you could assume we'll only match 1 of those 10, but we've no idea at plan time which one that'll be, and the partitions might vary drastically in size. The best I think we could do is take the total cost of those 10 and divide by 10 to get the average cost. It's much harder for range conditions, as those could match anything from 0 to all partitions. The best suggestion I saw for that was to multiply the costs by DEFAULT_INEQ_SEL.

I think for now, you might want to lower random_page_cost or increase effective_cache_size to encourage the nested loop -> index scan plan. A good range for effective_cache_size is anywhere between 50% and 75% of your server's RAM. However, that might not be ideal if your server is under memory pressure from other running processes. It also depends on how large shared_buffers is as a percentage of total RAM.

David
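
A minimal sketch of that per-session experiment is below; the numbers are placeholders only (they assume something like a dedicated 64GB server), not recommendations:

    -- Scope the experiment to the current session first; the defaults are
    -- random_page_cost = 4.0 and effective_cache_size = 4GB.
    SET random_page_cost = 1.1;          -- lower values favour index scans
    SET effective_cache_size = '48GB';   -- roughly 50-75% of RAM on a 64GB box

    -- Re-run EXPLAIN (ANALYZE, BUFFERS) on the slow query and check whether
    -- the planner now chooses the nested loop -> index scan plan.  If it
    -- does, the setting can be made permanent with:
    --   ALTER SYSTEM SET random_page_cost = 1.1;
    --   SELECT pg_reload_conf();

Testing with SET first keeps the change confined to one session, so a bad guess doesn't disturb plans for the rest of the workload.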