> On May 18, 2022, at 9:49 AM, Edgecombe, Rick P <rick.p.edgecombe@xxxxxxxxx> wrote:
> 
> On Wed, 2022-05-18 at 06:34 +0000, Song Liu wrote:
>>>> I am not quite sure of the exact work needed here. Rick, would you
>>>> have time to enable VM_FLUSH_RESET_PERMS for huge pages? Given the
>>>> merge window is coming soon, I guess we need the current workaround
>>>> in 5.19.
>>> 
>>> I would have a hard time squeezing that in now. The vmalloc part is
>>> easy, I think I already posted a diff. But first hibernate needs to
>>> be changed to not care about direct map page sizes.
>> 
>> I guess I missed the diff, could you please send a link to it?
> 
> https://lore.kernel.org/lkml/5bd16e2c06a2df357400556c6ae01bb5d3c5c32a.camel@xxxxxxxxx/
> 
> The remaining problem is that hibernate may encounter NP pages when
> saving memory to disk. It resets them with CPA calls 4k at a time. So
> if a page is NP, hibernate needs it to already be 4k, or it might need
> to split. I think hibernate should just use a different mapping to get
> at the page when it encounters this rare scenario. In that diff I put
> some locking so that hibernate couldn't race with a huge NP page, but
> then I thought we should just change hibernate.

I am not quite sure how to test the hibernate path. Given the merge
window is coming soon, how about we ship this patch in 5.19 and fix
VM_FLUSH_RESET_PERMS in a later release?

>> 
>>> I'm also not clear why we wouldn't want to use the prog pack
>>> allocator even if vmalloc huge pages was disabled. Doesn't it
>>> improve performance even with small page sizes, per your benchmarks?
>>> What is the downside to just always using it?
>> 
>> With the current version, when huge pages are disabled, the prog pack
>> allocator will use 4kB pages for each pack. We still get about 0.5%
>> performance improvement with 4kB prog packs.
> 
> Oh, I thought you were comparing a 2MB sized, small page mapped
> allocation to a 2MB sized, huge page mapped allocation.
> 
> It looks like the logic is to free a pack if it is empty, so for
> smaller packs you are more likely to let the pages go back to the page
> allocator. Then future allocations would break more pages.

That is correct. This is the current behavior we have with 5.18-rc7.

> 
> So I think that is not a fully apples to apples test of huge mapping
> benefits. I'd be surprised if there really was no huge mapping
> benefit, since it's been seen with core kernel text. Did you notice if
> the direct map breakage was different between the tests?

I didn't check specifically, but it is expected that 4kB prog packs
will cause more direct map breakage.

Thanks,
Song
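
For reference, one quick way to compare the direct map breakage between
the two runs would be to dump the DirectMap counters from /proc/meminfo
before and after loading the test programs. A minimal userspace sketch
(assuming an x86_64 kernel, where these counters are exposed; this is
only an illustration, not part of the patch):

/* Print the x86 DirectMap* lines from /proc/meminfo. */
#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/meminfo", "r");
	char line[256];

	if (!f) {
		perror("fopen /proc/meminfo");
		return 1;
	}

	while (fgets(line, sizeof(line), f)) {
		/* Keep only DirectMap4k/DirectMap2M/DirectMap1G. */
		if (!strncmp(line, "DirectMap", strlen("DirectMap")))
			fputs(line, stdout);
	}

	fclose(f);
	return 0;
}

Diffing the DirectMap4k and DirectMap2M lines across the two runs would
show how many 2MB direct map entries get split in each case.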