On Sun, Jun 26, 2022 at 10:57:17PM +0800, Qi Zheng wrote:
> The commit e5251fd43007 ("mm/hugetlb: introduce set_huge_swap_pte_at()
> helper") adds set_huge_swap_pte_at() to handle swap entries on
> architectures that support hugepages consisting of contiguous ptes.
> Currently, set_huge_swap_pte_at() is only overridden by arm64.

Bleh.  I hate the way we handle these currently.

> +static inline struct folio *hugetlb_swap_entry_to_folio(swp_entry_t entry)
> +{
> +	VM_BUG_ON(!is_migration_entry(entry) && !is_hwpoison_entry(entry));
> +
> +	return page_folio(pfn_to_page(swp_offset(entry)));
> +}

We haven't needed a pfn_to_folio() yet, but perhaps we should have one?

Related, how should we store migration entries for multi-order folios
in the page tables?  We can either encode the individual page in
question, or we can encode the folio.  Do we need to support folios
being mapped askew (ie unaligned), or will folios always be mapped
aligned?

> +	if (!pte_present(pte)) {
> +		struct folio *folio;
> +
> +		folio = hugetlb_swap_entry_to_folio(pte_to_swp_entry(pte));
> +		ncontig = num_contig_ptes(folio_size(folio), &pgsize);
> +
> +		for (i = 0; i < ncontig; i++, ptep++)
> +			set_pte_at(mm, addr, ptep, pte);
> +		return;
> +	}

It seems like a shame to calculate folio_size() only to turn it into a
number of pages.  Don't you want to just use:

	ncontig = folio_nr_pages(folio);