Re: [PATCH mm-unstable v1] mm/page_alloc: try not to overestimate free highatomic


+Cc Mel and Matt

On 10/21/24 19:25, Michal Hocko wrote:
> On Mon 21-10-24 11:10:50, Yu Zhao wrote:
>> On Mon, Oct 21, 2024 at 2:13 AM Michal Hocko <mhocko@xxxxxxxx> wrote:
>> >
>> > On Sat 19-10-24 23:13:15, Yu Zhao wrote:
>> > > OOM kills due to vastly overestimated free highatomic reserves were
>> > > observed:
>> > >
>> > >   ... invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0 ...
>> > >   Node 0 Normal free:1482936kB boost:0kB min:410416kB low:739404kB high:1068392kB reserved_highatomic:1073152KB ...
>> > >   Node 0 Normal: 1292*4kB (ME) 1920*8kB (E) 383*16kB (UE) 220*32kB (ME) 340*64kB (E) 2155*128kB (UE) 3243*256kB (UE) 615*512kB (U) 1*1024kB (M) 0*2048kB 0*4096kB = 1477408kB
>> > >
>> > > The second line above shows that the OOM kill was due to the following
>> > > condition:
>> > >
>> > >   free (1482936kB) - reserved_highatomic (1073152kB) = 409784KB < min (410416kB)
>> > >
>> > > And the third line shows there were no free pages in any
>> > > MIGRATE_HIGHATOMIC pageblocks, which otherwise would show up as type
>> > > 'H'. Therefore __zone_watermark_unusable_free() overestimated free
>> > > highatomic reserves. IOW, it underestimated the usable free memory by
>> > > over 1GB, which resulted in the unnecessary OOM kill.
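
Spelling out the failing check with the numbers from the quoted report, as a
minimal standalone sketch (plain userspace C for illustration only, not the
actual __zone_watermark_unusable_free() code):

#include <stdio.h>

int main(void)
{
	/* Numbers taken verbatim from the OOM report quoted above. */
	long free_kb                = 1482936;
	long reserved_highatomic_kb = 1073152;
	long min_kb                 = 410416;

	long usable_kb = free_kb - reserved_highatomic_kb;	/* 409784 */

	printf("usable=%ldkB min=%ldkB -> %s\n", usable_kb, min_kb,
	       usable_kb < min_kb ? "below min watermark, OOM path" : "ok");
	return 0;
}
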
>> >
>> > Why doesn't unreserve_highatomic_pageblock deal with this situation?
>> 
>> The current behavior of unreserve_highatomic_pageblock() seems WAI to
>> me: it unreserves highatomic pageblocks that contain *free* pages so

Hm I don't think it's completely WAI. The intention is that we should be
able to unreserve the highatomic pageblocks before going OOM, but there
seems to be an unintended corner case: if the pageblocks are fully
exhausted, they are not reachable for unreserving. The nr_reserved_highatomic
count then also becomes thoroughly misleading, as it blocks allocations based
on a limit that does not reflect reality. Your patch addresses the second
issue, but there's a cost to it when calculating the watermarks, and it would
be better to address the root issue instead.
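
To make the corner case concrete, here's a toy userspace model (an
assumption-laden sketch, not kernel code). The unreserve path only sees
pageblocks through the free lists, so a fully exhausted highatomic block is
simply invisible to it:

#include <stdbool.h>
#include <stdio.h>

enum migratetype { MOVABLE, HIGHATOMIC };

struct pageblock {
	enum migratetype mt;	/* what the pageblock bitmap would say */
	int free_pages;		/* pages of this block still on a free list */
};

/*
 * Mirrors the shape of the current search: only pageblocks that still
 * have pages on the MIGRATE_HIGHATOMIC free list are visible, so a
 * fully exhausted block is skipped and stays accounted as reserved.
 */
static bool try_unreserve(struct pageblock *blocks, int nr, long *nr_reserved)
{
	for (int i = 0; i < nr; i++) {
		if (blocks[i].mt != HIGHATOMIC || blocks[i].free_pages == 0)
			continue;
		blocks[i].mt = MOVABLE;
		*nr_reserved -= 512;	/* 2MB pageblock, 4kB pages */
		return true;
	}
	return false;
}

int main(void)
{
	/* One highatomic pageblock with every page already handed out. */
	struct pageblock blocks[] = { { HIGHATOMIC, 0 } };
	long nr_reserved_highatomic = 512;

	if (!try_unreserve(blocks, 1, &nr_reserved_highatomic))
		printf("nothing unreserved, nr_reserved_highatomic stays %ld\n",
		       nr_reserved_highatomic);
	return 0;
}
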

>> that those pages can become usable to others. There is nothing to
>> unreserve when they have no free pages.

Yeah, there are no actual free pages to unreserve, but unreserving such a
pageblock would still fix the nr_reserved_highatomic overestimate and thus
allow allocations to proceed.

> I do not follow. How can you have reserved highatomic pages of that size
> without having page blocks with free memory. In other words is this an
> accounting problem or reserves problem? This is not really clear from
> your description.

I think the problem is finding the highatomic pageblocks so they can be
unreserved once they become full. The proper fix is not exactly trivial,
though: either we'll have to scan the pageblock bitmap for highatomic
pageblocks, or track them in an additional data structure.
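
As a rough illustration of the first option, a toy userspace sketch
(hypothetical, not a proposed kernel patch) of a scan that goes by the
recorded migratetype rather than free-list visibility, which is what makes
the exhausted blocks findable:

#include <stdbool.h>
#include <stdio.h>

enum migratetype { MOVABLE, HIGHATOMIC };

#define PAGES_PER_BLOCK 512	/* 2MB pageblock, 4kB pages */

struct pageblock {
	enum migratetype mt;	/* analogue of the pageblock bitmap entry */
	int free_pages;
};

/*
 * Walk every pageblock and look at its recorded migratetype instead of
 * relying on free-list visibility; in the kernel this would mean checking
 * the migratetype of each pageblock in the zone, so the cost of the scan
 * grows with the number of pageblocks.
 */
static bool unreserve_by_bitmap_scan(struct pageblock *blocks, int nr,
				     long *nr_reserved_highatomic)
{
	for (int i = 0; i < nr; i++) {
		if (blocks[i].mt != HIGHATOMIC)
			continue;
		/* Found even though free_pages may be 0. */
		blocks[i].mt = MOVABLE;
		*nr_reserved_highatomic -= PAGES_PER_BLOCK;
		return true;
	}
	return false;
}

int main(void)
{
	struct pageblock blocks[] = { { MOVABLE, 300 }, { HIGHATOMIC, 0 } };
	long nr_reserved_highatomic = PAGES_PER_BLOCK;

	if (unreserve_by_bitmap_scan(blocks, 2, &nr_reserved_highatomic))
		printf("unreserved one block, nr_reserved_highatomic=%ld\n",
		       nr_reserved_highatomic);
	return 0;
}

The second option (tracking the highatomic pageblocks in an extra data
structure) would avoid that per-zone scan, at the price of the additional
bookkeeping when pageblocks are reserved and unreserved.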



