Hi Davidlohr,

On Sun, Apr 20, 2014 at 10:26 AM, Davidlohr Bueso <davidlohr@xxxxxx> wrote:
> Performing vma lookups without taking the mm->mmap_sem is asking
> for trouble. While doing the search, the vma in question can be
> modified or even removed before returning to the caller. Take the
> lock (shared) in order to avoid races while iterating through the
> vmacache and/or rbtree.

Yes, mm->mmap_sem should be held here. Applied, thanks.

>
> This patch is completely *untested*.
>
> Signed-off-by: Davidlohr Bueso <davidlohr@xxxxxx>
> Cc: Steven Miao <realmz6@xxxxxxxxx>
> Cc: adi-buildroot-devel@xxxxxxxxxxxxxxxxxxxxx
> ---
>  arch/blackfin/kernel/ptrace.c | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/arch/blackfin/kernel/ptrace.c b/arch/blackfin/kernel/ptrace.c
> index e1f88e0..8b8fe67 100644
> --- a/arch/blackfin/kernel/ptrace.c
> +++ b/arch/blackfin/kernel/ptrace.c
> @@ -117,6 +117,7 @@ put_reg(struct task_struct *task, unsigned long regno, unsigned long data)
>  int
>  is_user_addr_valid(struct task_struct *child, unsigned long start, unsigned long len)
>  {
> +	bool valid;
>  	struct vm_area_struct *vma;
>  	struct sram_list_struct *sraml;
>
> @@ -124,9 +125,12 @@ is_user_addr_valid(struct task_struct *child, unsigned long start, unsigned long
>  	if (start + len < start)
>  		return -EIO;
>
> +	down_read(&child->mm->mmap_sem);
>  	vma = find_vma(child->mm, start);
> -	if (vma && start >= vma->vm_start && start + len <= vma->vm_end)
> -		return 0;
> +	valid = vma && start >= vma->vm_start && start + len <= vma->vm_end;
> +	up_read(&child->mm->mmap_sem);
> +	if (valid)
> +		return 0;
>
>  	for (sraml = child->mm->context.sram_list; sraml; sraml = sraml->next)
>  		if (start >= (unsigned long)sraml->addr
> --
> 1.8.1.4
>

-steven
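
[Editor's note: for readers unfamiliar with the locking rule, find_vma() and any
dereference of the returned vma must happen with the mm's mmap_sem held, since
the vma can be changed or freed the moment the lock is dropped. Below is a
minimal, self-contained sketch of the same pattern the patch applies, written
for a kernel of this vintage; the helper name range_in_single_vma is
hypothetical and not part of the patch.]

/*
 * Hypothetical helper (not from the patch): check whether [start, start+len)
 * lies entirely inside a single vma. The mmap_sem is taken for reading around
 * the find_vma() call and the vma field accesses; only the boolean result is
 * used after the lock is released.
 */
static bool range_in_single_vma(struct mm_struct *mm,
				unsigned long start, unsigned long len)
{
	struct vm_area_struct *vma;
	bool valid;

	down_read(&mm->mmap_sem);
	vma = find_vma(mm, start);
	valid = vma && start >= vma->vm_start && start + len <= vma->vm_end;
	up_read(&mm->mmap_sem);

	return valid;
}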