On Sat, Nov 9, 2013 at 7:16 AM, Oleg Nesterov <oleg@xxxxxxxxxx> wrote:
> On 11/08, Sameer Nanda wrote:
>>
>> @@ -413,12 +413,20 @@ void oom_kill_process(struct task_struct *p, gfp_t gfp_mask, int order,
>>  					      DEFAULT_RATELIMIT_BURST);
>> @@ -456,10 +463,18 @@ void oom_kill_process(struct task_struct *p, gfp_t gfp_mask, int order,
>>  			}
>>  		}
>>  	} while_each_thread(p, t);
>> -	read_unlock(&tasklist_lock);
>>
>>  	rcu_read_lock();
>> +
>>  	p = find_lock_task_mm(victim);
>> +
>> +	/*
>> +	 * Since while_each_thread is currently not RCU safe, this unlock of
>> +	 * tasklist_lock may need to be moved further down if any additional
>> +	 * while_each_thread loops get added to this function.
>> +	 */
>> +	read_unlock(&tasklist_lock);
>
> Well, ack... but with this change find_lock_task_mm() relies on tasklist,
> so it makes sense to move rcu_read_lock() down before for_each_process().
>
> Otherwise this looks confusing, but I won't insist.

Agreed that this looks a bit confusing. I will respin the patch.
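
For reference, the reordering could look roughly like this (an untested
sketch against the 3.12-era oom_kill_process(); the pr_err() reporting is
omitted and the actual respin may well differ):

	} while_each_thread(p, t);

	/*
	 * Sketch: keep tasklist_lock held across find_lock_task_mm(),
	 * since while_each_thread() is not currently RCU safe.
	 */
	p = find_lock_task_mm(victim);
	read_unlock(&tasklist_lock);
	if (!p) {
		put_task_struct(victim);
		return;
	} else if (victim != p) {
		get_task_struct(p);
		put_task_struct(victim);
		victim = p;
	}

	/* mm cannot safely be dereferenced after task_unlock(victim) */
	mm = victim->mm;
	task_unlock(victim);

	/*
	 * rcu_read_lock() moved down here: it only needs to cover the
	 * for_each_process() walk that kills other users of victim->mm.
	 */
	rcu_read_lock();
	for_each_process(p)
		if (p->mm == mm && !same_thread_group(p, victim) &&
		    !(p->flags & PF_KTHREAD)) {
			if (p->signal->oom_score_adj == OOM_SCORE_ADJ_MIN)
				continue;
			do_send_sig_info(SIGKILL, SEND_SIG_FORCED, p, true);
		}
	rcu_read_unlock();

That way each lock stays tied to the traversal it actually protects.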
> Oleg.

Sameer