On Wed, Aug 20, 2014 at 05:46:22PM -0400, Peter Feiner wrote:
> In readable+writable+shared VMAs, PTEs created for read faults have
> their write bit set. If the read fault happens after VM_SOFTDIRTY is
> cleared, then the PTE's softdirty bit will remain clear after
> subsequent writes.
>
> Here's a simple code snippet to demonstrate the bug:
>
>   char* m = mmap(NULL, getpagesize(), PROT_READ | PROT_WRITE,
>                  MAP_ANONYMOUS | MAP_SHARED, -1, 0);
>   system("echo 4 > /proc/$PPID/clear_refs"); /* clear VM_SOFTDIRTY */
>   assert(*m == '\0');     /* new PTE allows write access */
>   assert(!soft_dirty(m));
>   *m = 'x';               /* should dirty the page */
>   assert(soft_dirty(m));  /* fails */
>
> With this patch, new PTEs created for read faults are write protected
> if the VMA has VM_SOFTDIRTY clear.
>
> Signed-off-by: Peter Feiner <pfeiner@xxxxxxxxxx>
> ---
>  mm/memory.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index ab3537b..282a959 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2755,6 +2755,8 @@ void do_set_pte(struct vm_area_struct *vma, unsigned long address,
>  		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
>  	else if (pte_file(*pte) && pte_file_soft_dirty(*pte))
>  		entry = pte_mksoft_dirty(entry);
> +	else if (!(vma->vm_flags & VM_SOFTDIRTY))
> +		entry = pte_wrprotect(entry);

It basically means that VM_SOFTDIRTY requires writenotify on the VMA.
What about the patch below? Untested.

It seems it will introduce a bug similar to the one fixed by commit
c9d0bf241451, *but* IIUC we already have that bug in the mprotect()
code path. I'll look more carefully tomorrow.

Not-signed-off-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index dfc791c42d64..67d509a15969 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -851,8 +851,9 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
 			if (type == CLEAR_REFS_MAPPED && !vma->vm_file)
 				continue;
 			if (type == CLEAR_REFS_SOFT_DIRTY) {
-				if (vma->vm_flags & VM_SOFTDIRTY)
-					vma->vm_flags &= ~VM_SOFTDIRTY;
+				vma->vm_flags &= ~VM_SOFTDIRTY;
+				vma->vm_page_prot = vm_get_page_prot(
+						vma->vm_flags & ~VM_SHARED);
 			}
 			walk_page_range(vma->vm_start, vma->vm_end,
 					&clear_refs_walk);

-- 
 Kirill A. Shutemov
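
P.S. The soft_dirty() helper used in the demo snippet above is not
defined in the mail. A minimal userspace sketch, assuming it checks the
soft-dirty flag via /proc/self/pagemap: each 64-bit pagemap entry
describes one virtual page, and bit 55 is the soft-dirty bit (see
Documentation/vm/soft-dirty.txt).

  #include <fcntl.h>
  #include <stdint.h>
  #include <unistd.h>

  /* Return 1 if the page backing 'addr' is soft-dirty, 0 if not,
   * -1 on error. Each 8-byte entry in /proc/self/pagemap describes
   * one virtual page; the entry for 'addr' lives at offset
   * (addr / page_size) * 8, and bit 55 is the soft-dirty bit. */
  static int soft_dirty(void *addr)
  {
  	uint64_t entry;
  	off_t off = ((uintptr_t)addr / getpagesize()) * sizeof(entry);
  	int fd = open("/proc/self/pagemap", O_RDONLY);

  	if (fd < 0)
  		return -1;
  	if (pread(fd, &entry, sizeof(entry), off) != sizeof(entry)) {
  		close(fd);
  		return -1;
  	}
  	close(fd);
  	return (entry >> 55) & 1;
  }

With a helper like this (plus the usual assert/mman/stdlib headers) the
demo builds stand-alone and, on an affected kernel, trips the final
assert.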