Hello,

On Wed, Mar 27, 2024 at 01:14:49PM -0400, Waiman Long wrote:
...
> > @@ -2718,11 +2739,6 @@ static void cpuset_migrate_mm(struct mm_struct *mm, const nodemask_t *from,
> >  	}
> >  }
> > 
> > -static void cpuset_post_attach(void)
> > -{
> > -	flush_workqueue(cpuset_migrate_mm_wq);
> > -}
> > -
> >  /*
> >   * cpuset_change_task_nodemask - change task's mems_allowed and mempolicy
> >   * @tsk: the task to change
> > @@ -3276,6 +3292,10 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
> >  	bool cpus_updated, mems_updated;
> >  	int ret;
> > 
> > +	ret = schedule_flush_migrate_mm();
> > +	if (ret)
> > +		return ret;
> > +
> 
> It may be too early to initiate the task_work at cpuset_can_attach() as no
> mm migration may happen. My suggestion is to do it at cpuset_attach() with
> at least one cpuset_migrate_mm() call.

Yeah, we can do that too. The downside is that we lose the ability to
return -ENOMEM unless we separate out allocation and queueing. Given that
flush_workqueue() when migration is not in progress is really cheap, and
that the existing code always flushes from post_attach(), I don't think
it's too bad, but yeah, it widens the scope of unnecessary waits.

So, yeah, what you're suggesting sounds good too, especially given that
migration is best effort anyway and already depends on memory allocation.
Let's see whether this works for Chuyi and I'll post an updated version
later.

Thanks.

-- 
tejun
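
P.S. The allocation/queueing split mentioned above could look roughly like
the sketch below. This is only illustrative: the helper names
(flush_migrate_mm_workfn, started_mm_migration) and the per-attach state are
hypothetical, not from the actual patch. The idea is to allocate in
->can_attach(), where -ENOMEM can still be propagated, and only queue the
task_work from ->attach() once an mm migration has actually been started.

	/* hypothetical per-attach state, not the real patch */
	static struct callback_head *flush_cb;

	static int cpuset_can_attach(struct cgroup_taskset *tset)
	{
		/* allocation can fail, so do it here where we can return -ENOMEM */
		flush_cb = kzalloc(sizeof(*flush_cb), GFP_KERNEL);
		if (!flush_cb)
			return -ENOMEM;
		...
	}

	static void cpuset_attach(struct cgroup_taskset *tset)
	{
		...
		if (started_mm_migration) {
			init_task_work(flush_cb, flush_migrate_mm_workfn);
			task_work_add(current, flush_cb, TWA_RESUME);
		} else {
			kfree(flush_cb);	/* no migration, nothing to flush */
		}
	}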