On Thu, Jul 12, 2018 at 01:29:41PM -0400, Johannes Weiner wrote:
> +/**
> + * cgroup_move_task - move task to a different cgroup
> + * @task: the task
> + * @to: the target css_set
> + *
> + * Move task to a new cgroup and safely migrate its associated stall
> + * state between the different groups.
> + *
> + * This function acquires the task's rq lock to lock out concurrent
> + * changes to the task's scheduling state and - in case the task is
> + * running - concurrent changes to its stall state.
> + */
> +void cgroup_move_task(struct task_struct *task, struct css_set *to)
> +{
> +        unsigned int task_flags = 0;
> +        struct rq_flags rf;
> +        struct rq *rq;
> +        u64 now;
> +
> +        rq = task_rq_lock(task, &rf);
> +
> +        if (task_on_rq_queued(task)) {
> +                task_flags = TSK_RUNNING;
> +        } else if (task->in_iowait) {
> +                task_flags = TSK_IOWAIT;
> +        }
> +        if (task->flags & PF_MEMSTALL)
> +                task_flags |= TSK_MEMSTALL;
> +
> +        if (task_flags) {
> +                update_rq_clock(rq);
> +                now = rq_clock(rq);
> +                psi_task_change(task, now, task_flags, 0);
> +        }
> +
> +        /*
> +         * Lame to do this here, but the scheduler cannot be locked
> +         * from the outside, so we move cgroups from inside sched/.
> +         */
> +        rcu_assign_pointer(task->cgroups, to);
> +
> +        if (task_flags)
> +                psi_task_change(task, now, 0, task_flags);
> +
> +        task_rq_unlock(rq, task, &rf);
> +}

Why is that not part of cpu_cgroup_attach() / sched_move_task() ?
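
For illustration only, a rough sketch of the shape I would expect if the
stall-state transfer lived next to sched_move_task(), which already holds
the task's rq lock. The helper name is made up, and it simply reuses the
TSK_* flags and the psi_task_change() calls from the hunk above:

static void psi_move_task(struct rq *rq, struct task_struct *task,
                          struct css_set *to)
{
        unsigned int task_flags = 0;
        u64 now;

        /* Caller holds the rq lock and has done update_rq_clock(rq). */
        lockdep_assert_held(&rq->lock);

        if (task_on_rq_queued(task))
                task_flags = TSK_RUNNING;
        else if (task->in_iowait)
                task_flags = TSK_IOWAIT;
        if (task->flags & PF_MEMSTALL)
                task_flags |= TSK_MEMSTALL;

        now = rq_clock(rq);

        if (task_flags)
                psi_task_change(task, now, task_flags, 0);      /* leave old groups */

        rcu_assign_pointer(task->cgroups, to);                  /* switch css_set */

        if (task_flags)
                psi_task_change(task, now, 0, task_flags);      /* enter new groups */
}

(Pure sketch, nothing more; the question is about where this belongs, not
about the state transfer itself.)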