On 12/09, Lorenzo Stoakes wrote:
>
> (As discussed on IRC) how about moving up the dup_mmap_sem lock one level, we
> can put the mm before the rmap lookup in build_map_info() is able to find it,
> which should avoid the whole issue?

Not sure I fully understand the problem, but so far I see nothing wrong
in this idea. However,

> @@ -1692,9 +1690,11 @@ static struct mm_struct *dup_mm(struct task_struct *tsk,
>  	if (!mm_init(mm, tsk, mm->user_ns))
>  		goto fail_nomem;
>
> +	uprobe_start_dup_mmap();
>  	err = dup_mmap(mm, oldmm);
>  	if (err)
>  		goto free_pt;
> +	uprobe_end_dup_mmap();

If try_module_get(mm->binfmt->module) fails after that, dup_mm() does
"goto free_pt;" and in this case ...

> @@ -1709,6 +1709,7 @@ static struct mm_struct *dup_mm(struct task_struct *tsk,
>  	mm->binfmt = NULL;
>  	mm_init_owner(mm, NULL);
>  	mmput(mm);
> +	uprobe_end_dup_mmap();

... we have the unbalanced uprobe_end_dup_mmap().

Also. Perhaps we can change dup_mmap() to set MMF_XXX before
uprobe_end_dup_mmap(),

	fail_uprobe_end:
	+	if (retval)
	+		set_bit(MMF_XXX, &mm->flags);
		uprobe_end_dup_mmap();
		return retval;

Then build_map_info() can check this flag. I guess we can reuse some of
the MMF_OOM_ bits? Maybe MMF_UNSTABLE...

Oleg.
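
A minimal sketch of what that check in build_map_info() might look like,
assuming dup_mmap() sets the bit on its failure path as suggested above;
MMF_XXX is only the placeholder name used above (not an existing flag), and
the body of the loop is abbreviated:

	i_mmap_lock_read(mapping);
	vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) {
		if (!valid_vma(vma, is_register))
			continue;
		/* dup_mmap() of this mm failed, it is about to be destroyed */
		if (test_bit(MMF_XXX, &vma->vm_mm->flags))
			continue;
		/* ... existing map_info allocation ... */
	}
	i_mmap_unlock_read(mapping);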