On 05/19, Michal Hocko wrote:
>
> Long term I would like to move this logic into the mm_struct, it would
> be just larger surgery I guess.

Why can't we do this right now? Just another MMF_ flag, set only once and
never cleared.

And I personally like this change "in general"; if nothing else, I recently
blamed this for_each_process_thread() loop.

But if we do this, I think we should also shift find_lock_task_mm() into
this loop. And this makes me think again that we need something like

	struct task_struct *next_task_with_mm(struct task_struct *p)
	{
		struct task_struct *t;

		p = p->group_leader;
		while ((p = next_task(p)) != &init_task) {
			if (p->flags & PF_KTHREAD)
				continue;
			t = find_lock_task_mm(p);
			if (t)
				return t;
		}
		return NULL;
	}

	#define for_each_task_lock_mm(p) \
		for (p = &init_task; (p = next_task_with_mm(p)); task_unlock(p))

Or we can move task_unlock() into next_task_with_mm(); it can check
mm != NULL or p != init_task.

Oleg.
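
P.S. To make the intended usage concrete, here is a minimal sketch of a
caller, assuming the next_task_with_mm()/for_each_task_lock_mm() proposal
above; the dump_tasks_with_mm() name and the printed fields are made up
for illustration, this is not code from oom_kill.c:

	/*
	 * Illustrative sketch only: walk every task that has an mm and
	 * print its total_vm. Inside the loop body p->mm is non-NULL and
	 * p is task_lock()'ed; task_unlock(p) runs as the loop increment.
	 */
	static void dump_tasks_with_mm(void)
	{
		struct task_struct *p;

		rcu_read_lock();	/* next_task() walk needs RCU */
		for_each_task_lock_mm(p) {
			pr_info("%s[%d] total_vm:%lu\n",
				p->comm, task_pid_nr(p), p->mm->total_vm);
		}
		rcu_read_unlock();
	}

Note that with the macro as written, a caller which break's out of the loop
early is left holding task_lock() on the current task and has to drop it
itself; moving task_unlock() into next_task_with_mm(), as suggested above,
would avoid that.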