On 06.02.23 15:58, Yin, Fengwei wrote:
> On 2/6/2023 10:44 PM, Matthew Wilcox wrote:
>> On Mon, Feb 06, 2023 at 10:06:38PM +0800, Yin Fengwei wrote:
>>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>>> index d6f8f41514cc..93192f04b276 100644
>>> --- a/include/linux/mm.h
>>> +++ b/include/linux/mm.h
>>> @@ -1162,6 +1162,9 @@ static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
>>>  vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page);
>>>  void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr);
>>> +void do_set_pte_range(struct vm_fault *vmf, struct folio *folio,
>>> +		unsigned long addr, pte_t *pte,
>>> +		unsigned long start, unsigned int nr);
>> There are only two callers of do_set_pte(), and they're both in mm.
>> I don't think we should retain do_set_pte() as a wrapper, but rather
>> change both callers to call 'set_pte_range()'. The 'do' doesn't add
>> any value, so let's drop that word.
> OK.
>>> +	if (!cow) {
>>> +		folio_add_file_rmap_range(folio, start, nr, vma, false);
>>> +		add_mm_counter(vma->vm_mm, mm_counter_file(page), nr);
>>> +	} else {
>>> +		/*
>>> +		 * The rmap code is not ready to handle COW with anonymous
>>> +		 * large folios yet. Capture and warn if a large folio
>>> +		 * is given.
>>> +		 */
>>> +		VM_WARN_ON_FOLIO(folio_test_large(folio), folio);
>>> +	}
>> The handling of COW pages is still very clunky.
>>
>> folio_add_new_anon_rmap() handles anonymous large folios just fine. I
>> think David was looking at the current code, not the code in mm-next.
> OK. Let's wait for further comment from David.
As I raised, page_add_new_anon_rmap() -> folio_add_new_anon_rmap() can
be used to add a fresh (a) PMD-mapped THP or (b) order-0 folio.

folio_add_new_anon_rmap() is not suitable for PTE-mapping a large folio,
which is what we are intending to do here, unless I am completely off.
PTE-mapping a large folio requires different accounting, different
mapcount handling, and different PG_anon_exclusive handling, none of
which is there yet.
--
Thanks,
David / dhildenb