On Fri 21-04-23 12:47:06, Mike Rapoport wrote:
> On Fri, Apr 21, 2023 at 11:05:20AM +0200, Michal Hocko wrote:
> > Hi,
> > 
> > On Wed 01-02-23 20:06:37, Mike Rapoport wrote:
> > [...]
> > > My current proposal is to have a cache of 2M pages close to the page
> > > allocator and use a GFP flag to make allocation requests use that cache.
> > > On the free() path, the pages that are mapped at PTE level will be put
> > > into that cache.
> > 
> > Are there still open questions which would benefit from a discussion at
> > LSFMM this year?
> 
> Yes, I believe so.
> 
> I was trying to get some numbers to see what the benefit of __GFP_UNMAPPED
> would be, and I couldn't find a benchmark that produces results with a good
> signal-to-noise ratio.
> 
> So while there seems to be general agreement on how to implement caching of
> 2M pages, there is still no evidence that it will be universally useful.
> 
> It would be interesting to discuss the reasons for the inconclusive results
> and, more importantly, what the general direction for dealing with direct
> map fragmentation should be.
> 
> As it stands now, packing code allocations into 2M pages would be an
> improvement, while data allocations that fragment the direct map do not
> have much impact on overall system performance.
> 
> I'll bring the mmtest results I have to begin the discussion.

Makes sense. Thanks!

-- 
Michal Hocko
SUSE Labs
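
For context, below is a rough caller-side sketch of how the proposed
__GFP_UNMAPPED flag could be used. It is only an illustration based on the
RFC discussion above, not mainline API, and the vmap()-based mapping is an
assumed caller pattern rather than the actual module allocator path.

/*
 * A minimal caller-side sketch, assuming the __GFP_UNMAPPED flag from
 * the RFC series; neither the flag nor this usage is mainline API.
 * The allocation is meant to be served from a cache of 2M pages whose
 * direct map entries were already split, so a new request does not
 * fragment another large mapping.  Because the page is removed from
 * the direct map, the caller has to establish its own mapping, here
 * illustrated with vmap().
 */
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>

static void *alloc_unmapped_code_page(struct page **pagep)
{
	struct page *page;
	void *addr;

	/* Ask the allocator for a page backed by the unmapped cache. */
	page = alloc_pages(GFP_KERNEL | __GFP_UNMAPPED, 0);
	if (!page)
		return NULL;

	/* Map it outside the direct map, e.g. as executable memory. */
	addr = vmap(&page, 1, VM_MAP, PAGE_KERNEL_EXEC);
	if (!addr) {
		__free_pages(page, 0);
		return NULL;
	}

	*pagep = page;
	return addr;
}

static void free_unmapped_code_page(void *addr, struct page *page)
{
	/*
	 * On free, a PTE-mapped page is expected to go back into the
	 * 2M cache instead of straight to the buddy allocator.
	 */
	vunmap(addr);
	__free_pages(page, 0);
}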