On 10/23/20 10:19 PM, John Hubbard wrote:
On 10/23/20 5:19 PM, Jason Gunthorpe wrote:
...
diff --git a/mm/memory.c b/mm/memory.c
index c48f8df6e50268..e2f959cce8563d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1171,6 +1171,17 @@ copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
 	mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_PAGE,
 				0, src_vma, src_mm, addr, end);
 	mmu_notifier_invalidate_range_start(&range);
+	/*
+	 * This is like a seqcount where the mmap_lock provides
+	 * serialization for the write side. However, unlike seqcount
+	 * the read side falls back to obtaining the mmap_lock rather
+	 * than spinning. For this reason none of the preempt related
+	 * machinery in seqcount is desired here.
Ooops... actually, that's a counter-argument to using the raw seqlock API, so maybe that approach is a dead end after all. Even so, it would still be good to wrap the "acquire" and "release" parts of this into functions, IMHO. We'd end up with, effectively, a lock API anyway.
+	 */
+	mmap_assert_write_locked(src_mm);
+	WRITE_ONCE(src_mm->write_protect_seq,
+		   src_mm->write_protect_seq + 1);
+	smp_wmb();
Even if you don't take the "use the raw seqlock API" advice, it seems like these
operations could be wrapped up in a function call, yes?
thanks,
--
John Hubbard
NVIDIA