On 9/11/24 14:57, David Hildenbrand wrote:
On 11.09.24 08:55, Dev Jain wrote:
In preparation for the second patch, abstract away the THP allocation
logic present in the create_huge_pmd() path, which corresponds to the
faulting case when no page is present.
There should be no functional change as a result of applying
this patch.
[...]
+
+static void map_pmd_thp(struct folio *folio, struct vm_fault *vmf,
+			struct vm_area_struct *vma, unsigned long haddr)
+{
+	pmd_t entry;
+
+	entry = mk_huge_pmd(&folio->page, vma->vm_page_prot);
+	entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
+	folio_add_new_anon_rmap(folio, vma, haddr, RMAP_EXCLUSIVE);
+	folio_add_lru_vma(folio, vma);
+	set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry);
+	update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
It's quite weird to see a mixture of haddr and vmf->address, and
likely this mixture is wrong or not required.
Looking at arc's update_mmu_cache_pmd() implementation, I cannot see
how passing in the unaligned address would do the right thing. But
maybe arc also doesn't trigger that code path ... who knows :)
If I am reading correctly, arch/arc/mm/tlb.c:update_mmu_cache_pmd()
calls update_mmu_cache_range(), which already expects an unaligned
address? But...
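For reference, the arc helper is roughly the following (paraphrased
from memory of arch/arc/mm/tlb.c, so double-check the tree):

	void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
				  pmd_t *pmd)
	{
		pte_t pte = __pte(pmd_val(*pmd));

		/* The address is forwarded as-is, aligned or not. */
		update_mmu_cache_range(NULL, vma, addr, &pte, HPAGE_PMD_NR);
	}

So whether the unaligned address is fine on arc comes down to what its
update_mmu_cache_range() does with it.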
Staring at some other update_mmu_cache_pmd() users, it's quite
inconsistent. Primarily only do_huge_pmd_numa_page() and
__do_huge_pmd_anonymous_page() use the unaligned address. The others
seem to use the aligned address ... as one would expect when modifying
a PMD.
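Side by side, the two styles (the first from the hunk above, the
second matching what most other callers do):

	set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry);
	update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);	/* unaligned */

versus

	set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry);
	update_mmu_cache_pmd(vma, haddr, vmf->pmd);		/* PMD-aligned */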
Looking at riscv:
update_mmu_cache_pmd()->update_mmu_cache()->update_mmu_cache_range().
The address that ends up being passed to local_flush_tlb_page() looks
like it is expected to be aligned.
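As a sketch of why the distinction matters (hypothetical and
simplified, not riscv's literal code): if the arch hook feeds the
address into a page-granular flush, the caller's choice of address
decides which entry gets flushed:

	/* Hypothetical arch hook, for illustration only. */
	static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
						unsigned long addr, pmd_t *pmd)
	{
		/*
		 * A page-granular flush acts on the page containing addr;
		 * for a PMD-sized mapping, passing haddr makes the flushed
		 * address unambiguously name the huge entry.
		 */
		local_flush_tlb_page(addr);
	}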
I suggest changing this function to *not* take the vmf, and renaming
it to something like:

static void folio_map_anon_pmd(struct folio *folio, struct vm_area_struct *vma,
			       pmd_t *pmd, unsigned long haddr)
Then use haddr also to do the update_mmu_cache_pmd().
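For concreteness, a sketch of the helper with that suggestion applied
(derived from the hunk above; untested):

	static void folio_map_anon_pmd(struct folio *folio, struct vm_area_struct *vma,
				       pmd_t *pmd, unsigned long haddr)
	{
		pmd_t entry;

		entry = mk_huge_pmd(&folio->page, vma->vm_page_prot);
		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
		folio_add_new_anon_rmap(folio, vma, haddr, RMAP_EXCLUSIVE);
		folio_add_lru_vma(folio, vma);
		set_pmd_at(vma->vm_mm, haddr, pmd, entry);
		/* PMD-aligned address, consistent with set_pmd_at() above. */
		update_mmu_cache_pmd(vma, haddr, pmd);
		add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
		mm_inc_nr_ptes(vma->vm_mm);
	}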
+	add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
+	mm_inc_nr_ptes(vma->vm_mm);
+}
+
+static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf)
+{
+	struct vm_area_struct *vma = vmf->vma;
+	struct folio *folio;
+	pgtable_t pgtable;
+	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
+	vm_fault_t ret = 0;
+	gfp_t gfp = vma_thp_gfp_mask(vma);
Nit: While at it, try to use reverse christmas-tree ordering where
possible; it makes things more readable. You could make haddr const.
	struct vm_area_struct *vma = vmf->vma;
	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
	gfp_t gfp = vma_thp_gfp_mask(vma);
	struct folio *folio;
	vm_fault_t ret = 0;
	...
+
+	folio = pmd_thp_fault_alloc(gfp, vma, haddr, vmf->address);
+	if (unlikely(!folio)) {
+		ret = VM_FAULT_FALLBACK;
+		goto release;
+	}
+
+	pgtable = pte_alloc_one(vma->vm_mm);
+	if (unlikely(!pgtable)) {
+		ret = VM_FAULT_OOM;
+		goto release;
+	}
	vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
+
Nit: Unrelated change.
	if (unlikely(!pmd_none(*vmf->pmd))) {
		goto unlock_release;