On 30.09.24 07:28, Dev Jain wrote:
In preparation for the second patch, abstract away the THP allocation
logic present in the create_huge_pmd() path, which corresponds to the
faulting case when no page is present.
There should be no functional change as a result of applying this patch,
except that, as David notes at [1], a PMD-aligned address is now
passed to update_mmu_cache_pmd().
[1]: https://lore.kernel.org/all/ddd3fcd2-48b3-4170-bcaa-2fe66e093f43@xxxxxxxxxx/
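(Editorial illustration, not part of the patch: "PMD-aligned" here means the
faulting address rounded down to the start of its PMD range before it is
handed to update_mmu_cache_pmd(), roughly along these lines; the variable
names are only for illustration.)

	/* round the fault address down to the PMD boundary */
	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
	...
	/* pass the aligned address, not vmf->address */
	update_mmu_cache_pmd(vma, haddr, vmf->pmd);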
Acked-by: David Hildenbrand <david@xxxxxxxxxx>
Reviewed-by: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Signed-off-by: Dev Jain <dev.jain@xxxxxxx>
---
mm/huge_memory.c | 98 ++++++++++++++++++++++++++++--------------------
1 file changed, 57 insertions(+), 41 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 4e34b7f89daf..e3bcdbc9baa2 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1148,47 +1148,81 @@ unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
}
EXPORT_SYMBOL_GPL(thp_get_unmapped_area);
-static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
-			struct page *page, gfp_t gfp)
+static struct folio *vma_alloc_anon_folio_pmd(struct vm_area_struct *vma,
+					      unsigned long addr)
Just a nit as I am skimming over this once more:
We try to make any new code / code we touch use a 2-tab
indentation for the second parameter line.
E.g.,
static struct folio *vma_alloc_anon_folio_pmd(struct vm_area_struct *vma,
		unsigned long addr)
{
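For context, a rough sketch of what the new helper amounts to, written with
that 2-tab continuation style (illustrative only, reconstructed from the
description above rather than copied from the patch; the exact body in the
patch may differ):

static struct folio *vma_alloc_anon_folio_pmd(struct vm_area_struct *vma,
		unsigned long addr)
{
	gfp_t gfp = vma_thp_gfp_mask(vma);
	struct folio *folio;

	/* allocate a PMD-sized anonymous folio at the PMD-aligned address */
	folio = vma_alloc_folio(gfp, HPAGE_PMD_ORDER, vma,
			addr & HPAGE_PMD_MASK, true);
	if (unlikely(!folio))
		count_vm_event(THP_FAULT_FALLBACK);
	return folio;
}

The fault path can then call this helper and fall back to the non-THP path
when it returns NULL, which is the abstraction the second patch builds on.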
--
Cheers,
David / dhildenb