On Mon, 2020-10-26 at 11:15 +0200, Mike Rapoport wrote:
> On Mon, Oct 26, 2020 at 12:38:32AM +0000, Edgecombe, Rick P wrote:
> > On Sun, 2020-10-25 at 12:15 +0200, Mike Rapoport wrote:
> > > From: Mike Rapoport <rppt@xxxxxxxxxxxxx>
> > >
> > > When DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP is enabled a page
> > > may not be present in the direct map and has to be explicitly
> > > mapped before it can be copied.
> > >
> > > On arm64 it is possible that a page would be removed from the
> > > direct map using set_direct_map_invalid_noflush() but
> > > __kernel_map_pages() will refuse to map this page back if
> > > DEBUG_PAGEALLOC is disabled.
> >
> > It looks to me that arm64 __kernel_map_pages() will still attempt
> > to map it if rodata_full is true, how does this happen?
>
> Unless I misread the code, arm64 requires both rodata_full and
> debug_pagealloc_enabled() to be true for __kernel_map_pages() to do
> anything.
> But the rodata_full condition applies to set_direct_map_*_noflush() as
> well, so with !rodata_full the linear map won't ever be changed.

Hmm, it looks to me like __kernel_map_pages() will only skip the remap
if both debug pagealloc and rodata_full are false.

But now I'm wondering if maybe we could simplify things by just moving
the hibernate unmapped page logic off of the direct map. On x86,
text_poke() used to use a reserved fixmap pte that it could rely on to
remap memory with. If hibernate had some separate pte for remapping
like that, then we wouldn't have any direct map restrictions caused by
it/__kernel_map_pages(), and it wouldn't have to worry about relying on
anything else.
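For reference, this is roughly the arm64 code I'm reading (paraphrased
from arch/arm64/mm/pageattr.c, so treat it as a sketch rather than an
exact quote of the current tree):

void __kernel_map_pages(struct page *page, int numpages, int enable)
{
	/* Only bails out when *both* mechanisms are disabled. */
	if (!debug_pagealloc_enabled() && !rodata_full)
		return;

	set_memory_valid((unsigned long)page_address(page), numpages, enable);
}

int set_direct_map_invalid_noflush(struct page *page)
{
	struct page_change_data data = {
		.set_mask = __pgprot(0),
		.clear_mask = __pgprot(PTE_VALID),
	};

	/* With !rodata_full this is a no-op, so the linear map is never touched. */
	if (!rodata_full)
		return 0;

	return apply_to_page_range(&init_mm,
				   (unsigned long)page_address(page),
				   PAGE_SIZE, change_page_range, &data);
}

If I'm reading that right, then whenever set_direct_map_invalid_noflush()
actually invalidated a page, rodata_full must have been true, so
__kernel_map_pages() should still be willing to map it back even with
DEBUG_PAGEALLOC disabled.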