On 2024/12/9 17:42, Zhenhua Huang wrote:
Memory hotplug operations update the memmap (i.e. struct page). On arm64 with a 4K page size, the typical hotplug granularity is a 128M section, which corresponds to a 2M memmap buffer. Since commit c1cc1552616d ("arm64: MMU initialisation"), this 2M buffer is mapped with a single PMD entry as an optimization. However, commit ba72b4c8cf60 ("mm/sparsemem: support sub-section hotplug") introduced 2M sub-section hotplug granularity, which causes problems with that PMD mapping (refer to the change log of patch #1). This series adjusts the logic to populate the vmemmap with huge pages only if the hotplug address/size is section-aligned, and to fall back to page-level mappings otherwise.
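For context, the idea boils down to a change along the following lines in arch/arm64/mm/mmu.c's vmemmap_populate(). This is only a sketch of the logic described above, not the posted diff; it assumes the generic vmemmap_populate_basepages()/vmemmap_populate_hugepages() helpers from mm/sparse-vmemmap.c and expresses the section-alignment test via the per-section memmap size (the exact condition in the patch may differ):

/*
 * Sketch only: use PMD (huge page) vmemmap mappings only when the range
 * covers the memmap of whole, aligned sections; otherwise fall back to
 * base pages, so a later sub-section hot-remove never has to split or
 * free a PMD that also maps another sub-section's memmap.
 */
int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
			       struct vmem_altmap *altmap)
{
	/* Memmap size of one 128M section with 4K pages: 2M, i.e. one PMD. */
	unsigned long section_memmap = PAGES_PER_SECTION * sizeof(struct page);

	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));

	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES) ||
	    !IS_ALIGNED(start, section_memmap) ||
	    !IS_ALIGNED(end, section_memmap))
		return vmemmap_populate_basepages(start, end, node, altmap);

	return vmemmap_populate_hugepages(start, end, node, altmap);
}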
Could any expert please help review this series?
Changes since v1:
- Modified the change log to make it clearer, based on Catalin's comments.

Zhenhua Huang (2):
  arm64: mm: vmemmap populate to page level if not section aligned
  arm64: mm: implement vmemmap_check_pmd for arm64

 arch/arm64/mm/mmu.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)