On Thu 20-08-20 12:55:56, Oleg Nesterov wrote:
> On 08/19, Suren Baghdasaryan wrote:
> >
> > Since the combination of CLONE_VM and !CLONE_SIGHAND is rarely
> > used the additional mutex lock in that path of the clone() syscall should
> > not affect its overall performance. Clearing the MMF_PROC_SHARED flag
> > (when the last process sharing the mm exits) is left out of this patch to
> > keep it simple and because it is believed that this threading model is
> > rare.
>
> vfork() ?

Could you be more specific?

> > --- a/kernel/fork.c
> > +++ b/kernel/fork.c
> > @@ -1403,6 +1403,15 @@ static int copy_mm(unsigned long clone_flags, struct task_struct *tsk)
> >  	if (clone_flags & CLONE_VM) {
> >  		mmget(oldmm);
> >  		mm = oldmm;
> > +		if (!(clone_flags & CLONE_SIGHAND)) {
>
> I agree with Christian, you need CLONE_THREAD

This was my suggestion to Suren, likely because I've misremembered which
clone flag is responsible for the signal delivery. But now, after double
checking, we do explicitly disallow CLONE_SIGHAND && !CLONE_VM, so
CLONE_THREAD is the right thing to check.

> > +			/* We need to synchronize with __set_oom_adj */
> > +			mutex_lock(&oom_adj_lock);
> > +			set_bit(MMF_PROC_SHARED, &mm->flags);
> > +			/* Update the values in case they were changed after copy_signal */
> > +			tsk->signal->oom_score_adj = current->signal->oom_score_adj;
> > +			tsk->signal->oom_score_adj_min = current->signal->oom_score_adj_min;
> > +			mutex_unlock(&oom_adj_lock);
>
> I don't understand how this can close the race with __set_oom_adj...
>
> What if __set_oom_adj() is called right after mutex_unlock() ? It will see
> MMF_PROC_SHARED, but for_each_process() won't find the new child until
> copy_process() does list_add_tail_rcu(&p->tasks, &init_task.tasks) ?

Good point. Then we will have to move this thing there. Thanks!

--
Michal Hocko
SUSE Labs
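
For illustration only, "move this thing there" could look roughly like the
sketch below: the MMF_PROC_SHARED update happens only after copy_process()
has done list_add_tail_rcu(&p->tasks, &init_task.tasks) and released
tasklist_lock, so a concurrent __set_oom_adj() either sees the flag or finds
the new child via for_each_process(). This is a rough, untested sketch, not
the posted patch; the helper name copy_oom_score_adj() is invented here, and
Oleg's vfork() question is left open.

/* Rough sketch against kernel/fork.c -- illustrative only */
static void copy_oom_score_adj(unsigned long clone_flags, struct task_struct *tsk)
{
	/* Kernel threads have no mm to flag */
	if (!tsk->mm)
		return;

	/* Only a new process sharing the mm matters, hence the CLONE_THREAD check */
	if ((clone_flags & (CLONE_VM | CLONE_THREAD)) != CLONE_VM)
		return;

	/* We need to synchronize with __set_oom_adj */
	mutex_lock(&oom_adj_lock);
	set_bit(MMF_PROC_SHARED, &tsk->mm->flags);
	/* Update the values in case they were changed after copy_signal */
	tsk->signal->oom_score_adj = current->signal->oom_score_adj;
	tsk->signal->oom_score_adj_min = current->signal->oom_score_adj_min;
	mutex_unlock(&oom_adj_lock);
}

In this sketch copy_process() would call the helper only after
list_add_tail_rcu(&p->tasks, &init_task.tasks) and after tasklist_lock has
been dropped, so the mutex is never taken under the spinlock and the child
is already visible to the for_each_process() loop in __set_oom_adj().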