On 17/01/2019 16:51, zhong jiang wrote:
On 2019/1/16 19:41, Vinayak Menon wrote:
On 1/15/2019 1:54 PM, Laurent Dufour wrote:
On 14/01/2019 14:19, Vinayak Menon wrote:
On 1/11/2019 9:13 PM, Vinayak Menon wrote:
Hi Laurent,
We are observing an issue with speculative page fault with the following test code on ARM64 (4.14 kernel, 8 cores).
With the patch below, we don't hit the issue.
From: Vinayak Menon <vinmenon@xxxxxxxxxxxxxx>
Date: Mon, 14 Jan 2019 16:06:34 +0530
Subject: [PATCH] mm: flush stale tlb entries on speculative write fault
It is observed that the following scenario results in
threads A and B of process 1 blocking on pthread_mutex_lock
forever after a few iterations.
CPU 1                           CPU 2                           CPU 3
Process 1,                      Process 1,                      Process 1,
Thread A                        Thread B                        Thread C

while (1) {                     while (1) {                     while (1) {
  pthread_mutex_lock(l)           pthread_mutex_lock(l)           fork
  pthread_mutex_unlock(l)         pthread_mutex_unlock(l)       }
}                               }
When thread C's copy_one_pte write-protects the parent PTE
(of lock l), stale TLB entries with write permission can remain
on at least one of the CPUs. This is a problem if thread A or B
then hits the write fault. Although dup_mmap calls flush_tlb_mm
after copy_page_range, the speculative page fault does not take
mmap_sem, so it can go ahead and fix a fault soon after CPU 3
does ptep_set_wrprotect. The CPU with the stale TLB entry can
then still modify old_page even after it has been copied to
new_page by wp_page_copy, thus causing a corruption.
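For reference, the write-protect step in copy_one_pte() that opens this
window looks roughly like the excerpt below (simplified from the 4.14-era
mm/memory.c, exact code may differ); note that no per-PTE TLB flush is
done at this point, the flush is deferred to flush_tlb_mm() in dup_mmap():

	/* copy_one_pte(), simplified excerpt */
	if (is_cow_mapping(vm_flags)) {
		/* write-protect the parent's PTE; no TLB flush here */
		ptep_set_wrprotect(src_mm, addr, src_pte);
		pte = pte_wrprotect(pte);
	}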
Nice catch and thanks for your investigation!
There is a real synchronization issue here between copy_page_range() and the speculative page fault handler. I didn't hit it on PowerVM since the TLBs are flushed when arch_leave_lazy_mmu_mode() is called in copy_page_range(), but I can now hit it when running on x86_64.
Signed-off-by: Vinayak Menon <vinmenon@xxxxxxxxxxxxxx>
---
mm/memory.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/mm/memory.c b/mm/memory.c
index 52080e4..1ea168ff 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4507,6 +4507,13 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
return VM_FAULT_RETRY;
}
+ /*
+ * Discard tlb entries created before ptep_set_wrprotect
+ * in copy_one_pte
+ */
+ if (flags & FAULT_FLAG_WRITE && !pte_write(vmf.orig_pte))
+ flush_tlb_page(vmf.vma, address);
+
mem_cgroup_oom_enable();
ret = handle_pte_fault(&vmf);
mem_cgroup_oom_disable();
Your patch fixes the race, but I'm wondering about the cost of these TLB flushes. Here we are flushing on a per-page basis (architectures like x86_64 are smarter and flush more pages), and a TLB range flush is requested each time a COW page is newly touched. I think this could have a bad impact.
Another option would be to flush the range in copy_pte_range() before unlocking the page table lock. This would flush entries that flush_tlb_mm() later handles in dup_mmap(), but it would happen only once per COW VMA per fork.
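For illustration only, such a range flush could look roughly like the
untested sketch below against the 4.14-era copy_pte_range() (variable
names are approximate):

	/* copy_pte_range(), untested sketch: remember where this run started */
	unsigned long start = addr;
	/* ... pte copy loop ... */
	arch_leave_lazy_mmu_mode();
	/* flush the just-copied range while the source PTL is still held */
	flush_tlb_range(vma, start, addr);
	spin_unlock(src_ptl);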
But wouldn't this cause an unnecessary impact if most of the COW pages remain untouched (which I assume would be the usual case) and thus never fault?
I tried the attached patch which seems to fix the issue on x86_64. Could you please give it a try on arm64?
Your patch works fine on arm64 with a minor change. Thanks Laurent.
Hi, Vinayak and Laurent
I think the change below will impact performance significantly, because most
processes have many VMAs with COW flags. Flushing the TLB in advance is not the
best way to avoid the issue, since flush_tlb_mm will be called later anyway.
I think we can try the following approach instead:
vm_write_begin(vma)
copy_pte_range
vm_write_end(vma)
The speculative page fault will then fall back to grabbing the mmap_sem and running the normal path.
Any thoughts?
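For context, a rough sketch of how that back-off works, assuming the
vm_sequence based vm_write_begin()/vm_write_end() helpers from the
speculative page fault series (exact code may differ):

	/* vm_write_begin() makes vma->vm_sequence odd, vm_write_end() even again */
	static inline void vm_write_begin(struct vm_area_struct *vma)
	{
		write_seqcount_begin(&vma->vm_sequence);
	}

	static inline void vm_write_end(struct vm_area_struct *vma)
	{
		write_seqcount_end(&vma->vm_sequence);
	}

	/* in __handle_speculative_fault(), an odd count means a writer is active */
	unsigned int seq = raw_read_seqcount(&vma->vm_sequence);
	if (seq & 1)
		return VM_FAULT_RETRY;	/* fall back to the mmap_sem protected path */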
Here is a new version of the patch fixing this issue. There is no
additional TLB flush; the whole fix relies on the vm_write_{begin,end}
calls.
I did some testing on x86_64 and PowerPC, but that needs to be double
checked on arm64.
Vinayak, Zhong, could you please give it a try?
Thanks,
Laurent.
From 3be977febb9ff93d516a2d222cca4b5a52472a9f Mon Sep 17 00:00:00 2001
From: Laurent Dufour <ldufour@xxxxxxxxxxxxx>
Date: Fri, 18 Jan 2019 16:19:08 +0100
Subject: [PATCH] mm: protect against PTE changes done by dup_mmap()
Vinayak Menon and Ganesh Mahendran reported that the following scenario may
lead to threads being blocked due to data corruption:
CPU 1                           CPU 2                           CPU 3
Process 1,                      Process 1,                      Process 1,
Thread A                        Thread B                        Thread C

while (1) {                     while (1) {                     while (1) {
  pthread_mutex_lock(l)           pthread_mutex_lock(l)           fork
  pthread_mutex_unlock(l)         pthread_mutex_unlock(l)       }
}                               }
In detail, this happens because:
CPU 1                   CPU 2                           CPU 3
fork()
  copy_pte_range()
    set PTE rdonly
got to next VMA...
 .                      PTE is seen rdonly              PTE still writable
 .                      thread is writing to page
 .                      -> page fault
 .                      copy the page                   Thread writes to page
 .                       .                              -> no page fault
 .                      update the PTE
 .                      flush TLB for that PTE
flush TLB                                               PTE are now rdonly
So the write done by CPU 3 interferes with the page copy operation
done by CPU 2, leading to data corruption.
To avoid this, all the VMAs involved in the COW mechanism are marked as
changing by calling vm_write_begin(). This ensures that the speculative
page fault handler will not try to handle a fault on these pages.
The marker remains set until the TLB is flushed, ensuring that all the
CPUs now see the PTE as not writable.
Once the TLB has been flushed, the marker is removed by calling
vm_write_end().
The variable 'last' is used to keep track of the latest VMA marked, to
handle the error path where only part of the VMAs may have been marked.
Reported-by: Ganesh Mahendran <opensource.ganesh@xxxxxxxxx>
Reported-by: Vinayak Menon <vinmenon@xxxxxxxxxxxxxx>
Signed-off-by: Laurent Dufour <ldufour@xxxxxxxxxxxxx>
---
kernel/fork.c | 30 ++++++++++++++++++++++++++++--
1 file changed, 28 insertions(+), 2 deletions(-)
diff --git a/kernel/fork.c b/kernel/fork.c
index f1258c2ade09..39854b97d06a 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -395,7 +395,7 @@ EXPORT_SYMBOL(free_task);
static __latent_entropy int dup_mmap(struct mm_struct *mm,
struct mm_struct *oldmm)
{
- struct vm_area_struct *mpnt, *tmp, *prev, **pprev;
+ struct vm_area_struct *mpnt, *tmp, *prev, **pprev, *last = NULL;
struct rb_node **rb_link, *rb_parent;
int retval;
unsigned long charge;
@@ -515,8 +515,18 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
rb_parent = &tmp->vm_rb;
mm->map_count++;
- if (!(tmp->vm_flags & VM_WIPEONFORK))
+ if (!(tmp->vm_flags & VM_WIPEONFORK)) {
+ if (IS_ENABLED(CONFIG_SPECULATIVE_PAGE_FAULT)) {
+ /*
+ * Mark this VMA as changing to prevent the
+ * speculative page fault handler from
+ * processing it until the TLB is flushed below.
+ */
+ last = mpnt;
+ vm_write_begin(mpnt);
+ }
retval = copy_page_range(mm, oldmm, mpnt);
+ }
if (tmp->vm_ops && tmp->vm_ops->open)
tmp->vm_ops->open(tmp);
@@ -530,6 +540,22 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
out:
up_write(&mm->mmap_sem);
flush_tlb_mm(oldmm);
+
+ if (IS_ENABLED(CONFIG_SPECULATIVE_PAGE_FAULT)) {
+ /*
+ * Since the TLB has been flushed, we can safely unmark the
+ * copied VMAs and allow the speculative page fault handler to
+ * process them again.
+ * Walk back the VMA list from the last marked VMA.
+ */
+ for (; last; last = last->vm_prev) {
+ if (last->vm_flags & VM_DONTCOPY)
+ continue;
+ if (!(last->vm_flags & VM_WIPEONFORK))
+ vm_write_end(last);
+ }
+ }
+
up_write(&oldmm->mmap_sem);
dup_userfaultfd_complete(&uf);
fail_uprobe_end:
--
2.20.1