Adding James here.

+ James Morse <james.morse@xxxxxxx>

On 4/7/21 10:56 PM, Mike Rapoport wrote:
> From: Mike Rapoport <rppt@xxxxxxxxxxxxx>
>
> Hi,
>
> These patches aim to remove CONFIG_HOLES_IN_ZONE and essentially hardwire
> pfn_valid_within() to 1.

That would be really helpful on the arm64 platform, as it will save CPU cycles
on many generic MM paths, given that our pfn_valid() has been expensive.

>
> The idea is to mark NOMAP pages as reserved in the memory map and restore

I am not really sure, but could that be problematic for UEFI/EFI use cases,
which might have treated these as normal struct pages till now?

> the intended semantics of pfn_valid() to designate availability of struct
> page for a pfn.

Right, that would be better, as the current semantics are not ideal.

>
> With this the core mm will be able to cope with the fact that it cannot use
> NOMAP pages and the holes created by NOMAP ranges within MAX_ORDER blocks
> will be treated correctly even without the need for pfn_valid_within.
>
> The patches are only boot tested on qemu-system-aarch64 so I'd really
> appreciate memory stress tests on real hardware.

Did some preliminary memory stress tests on a guest with portions of memory
marked as MEMBLOCK_NOMAP and did not find any obvious problems. But this might
require some testing in a real UEFI environment, with firmware using
MEMBLOCK_NOMAP memory, to make sure that changing these struct pages to
PageReserved() is safe.

>
> If this actually works we'll be one step closer to drop custom pfn_valid()
> on arm64 altogether.

Right, planning to rework and respin the RFC originally sent last month.

https://patchwork.kernel.org/project/linux-mm/patch/1615174073-10520-1-git-send-email-anshuman.khandual@xxxxxxx/
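
For reference, this is roughly how I read the proposed approach for the
PageReserved() question above: walk the memblock memory regions during memmap
initialization and reserve the struct pages backing NOMAP ranges. This is only
a sketch based on the cover letter, not the actual patch; the function name
memmap_init_nomap_reserved() is made up here, while for_each_mem_region(),
memblock_is_nomap() and reserve_bootmem_region() are the existing memblock/mm
helpers.

#include <linux/memblock.h>
#include <linux/mm.h>

/*
 * Sketch only: treat NOMAP ranges like other reserved ranges when the
 * memory map is initialized, so their struct pages are valid but marked
 * PageReserved() and never handed to the page allocator.
 */
static void __init memmap_init_nomap_reserved(void)
{
	struct memblock_region *region;
	phys_addr_t start, end;

	for_each_mem_region(region) {
		if (!memblock_is_nomap(region))
			continue;

		start = region->base;
		end = start + region->size;

		/* Marks every struct page in [start, end) as PageReserved */
		reserve_bootmem_region(start, end);
	}
}

With something like that in place, pfn_valid() would only need to answer "does
a struct page exist for this pfn", and any caller that actually needs the
linear mapping would have to check for NOMAP/reserved pages separately.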