hi,
> On 22.03.22 14:02, Xiaoguang Wang wrote:
>> hi,
>>
>>> On 18.03.22 10:55, Xiaoguang Wang wrote:
>>>> Module target_core_user will use it to implement zero copy feature.
>>>>
>>>> Signed-off-by: Xiaoguang Wang <xiaoguang.wang@xxxxxxxxxxxxxxxxx>
>>>> ---
>>>>  mm/memory.c | 1 +
>>>>  1 file changed, 1 insertion(+)
>>>>
>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>> index 1f745e4d11c2..9974d0406dad 100644
>>>> --- a/mm/memory.c
>>>> +++ b/mm/memory.c
>>>> @@ -1664,6 +1664,7 @@ void zap_page_range(struct vm_area_struct *vma, unsigned long start,
>>>>  	mmu_notifier_invalidate_range_end(&range);
>>>>  	tlb_finish_mmu(&tlb);
>>>>  }
>>>> +EXPORT_SYMBOL_GPL(zap_page_range);
>>>>
>>>>  /**
>>>>   * zap_page_range_single - remove user pages in a given range
>>>
>>> To which VMAs will you be applying zap_page_range? I assume only to some
>>> special ones where you previously vm_insert_page(s)_mkspecial'ed pages,
>>> not to some otherwise random VMAs, correct?
>>
>> Yes, you're right :)
>
> I'd suggest exposing a dedicated function that performs sanity checks on
> the vma (VM_PFNMAP ?) and only zaps within a single VMA.
>
> Essentially zap_page_range_single(), excluding "struct zap_details
> *details" and including sanity checks.
>
> Reason is that we don't want anybody to blindly zap_page_range() within
> random VMAs from a kernel module.
OK, I see, thanks. Xu Yu and I will try to implement such a new helper
in the next version of this patch set.
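For reference, a rough sketch of what such a dedicated helper could look
like, along the lines suggested above (the function name, return type, and
the exact set of sanity checks are hypothetical, not something agreed in
this thread; this is an illustrative kernel-code sketch, not a buildable
standalone unit):

```c
/*
 * Hypothetical helper along the lines discussed above: behaves like
 * zap_page_range_single() but takes no "struct zap_details *" and
 * refuses to operate on VMAs a module should not be touching.
 * Name and checks are illustrative only.
 */
int zap_vma_page_range(struct vm_area_struct *vma,
		       unsigned long address, unsigned long size)
{
	/* Only allow special/PFN-mapped VMAs, not arbitrary ones. */
	if (!(vma->vm_flags & VM_PFNMAP))
		return -EINVAL;

	/* Only zap within this single VMA. */
	if (address < vma->vm_start || address + size > vma->vm_end)
		return -EINVAL;

	zap_page_range_single(vma, address, size, NULL);
	return 0;
}
EXPORT_SYMBOL_GPL(zap_vma_page_range);
```

Whether the check should be VM_PFNMAP alone or also cover VM_MIXEDMAP,
and whether a violation should return an error or WARN, would be up to
review.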
Regards,
Xiaoguang Wang