On 10/23/24 11:25, Matt Fleming wrote:
> On Wed, Oct 23, 2024 at 8:35 AM Vlastimil Babka <vbabka@xxxxxxx> wrote:
>>
>> I thought the alloc demand is only blocked on the pessimistic watermark
>> calculation. Usable free pages exist, but the allocation is not allowed to
>> use them.
>
> I'm confused -- I thought the problem was the inverse of your
> statement: the allocation is attempted because
> __zone_watermark_unusable_free() claims the highatomic pages are free
> but they're not?

AFAICS the fix is about a GFP_HIGHUSER_MOVABLE allocation, so it's not
eligible for the highatomic reserves. Thus the watermark check in
__zone_watermark_unusable_free() will add z->nr_reserved_highatomic to
unusable_free, which is then subtracted from the actual NR_FREE_PAGES. But
since there are few or no actual free highatomic pages within NR_FREE_PAGES,
we subtract more than we should, which makes the watermark check very
pessimistic and likely to fail. So the allocation is denied even though it
could find plenty of non-highatomic pages to allocate while staying above
the watermark.

The problem you describe would apply to a highatomic allocation. Such an
allocation would then try to reserve more, but might conclude that we
already have too much reserved and not reserve anything. However,
highatomic pageblocks that are already full don't really contribute to that
reserve anymore, so it would be better to stop marking and counting them as
highatomic, and instead allow new pageblocks to be reserved.

So I think both kinds of allocations (highatomic or not) are losing here
due to full highatomic pageblocks.
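
To illustrate the accounting I mean, here is a simplified, self-contained
sketch (not the kernel source): the struct, helper and numbers are toy
stand-ins, though the names are meant to mirror zone->nr_reserved_highatomic,
NR_FREE_PAGES and the subtraction done in __zone_watermark_unusable_free().

#include <stdbool.h>
#include <stdio.h>

struct toy_zone {
	long nr_free_pages;		/* analogous to NR_FREE_PAGES */
	long nr_reserved_highatomic;	/* pages counted as highatomic reserve */
	long watermark;			/* watermark the allocation must clear */
};

/*
 * A non-highatomic request (e.g. GFP_HIGHUSER_MOVABLE) may not dip into
 * the highatomic reserve, so the whole reserved count is treated as
 * unusable -- even when the highatomic pageblocks are already full and
 * contribute almost nothing to the free page count.
 */
static bool toy_watermark_ok(const struct toy_zone *z, bool highatomic_alloc)
{
	long unusable_free = 0;

	if (!highatomic_alloc)
		unusable_free += z->nr_reserved_highatomic;

	return z->nr_free_pages - unusable_free > z->watermark;
}

int main(void)
{
	/*
	 * Hypothetical numbers: 10000 genuinely free non-highatomic pages,
	 * but 8000 pages' worth of highatomic pageblocks that are full, so
	 * nr_reserved_highatomic still reads 8000.
	 */
	struct toy_zone z = {
		.nr_free_pages = 10000,
		.nr_reserved_highatomic = 8000,
		.watermark = 4000,
	};

	/* 10000 - 8000 = 2000 <= 4000: denied despite ample free pages */
	printf("movable alloc allowed: %d\n", toy_watermark_ok(&z, false));
	return 0;
}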