On Mon 14-11-22 12:46:53, Michal Hocko wrote:
> On Mon 14-11-22 12:44:48, Michal Hocko wrote:
> > On Mon 14-11-22 00:41:21, Zhongkun He wrote:
> > > Hi Andrew, thanks for your reply.
> > >
> > > > This sounds a bit suspicious. Please share much more detail about
> > > > these races. If we proceed with this design then mpol_put_async()
> > > > should have comments which fully describe the need for the async free.
> > > >
> > > > How do we *know* that these races are fully prevented with this
> > > > approach? How do we know that mpol_put_async() won't free the data
> > > > until the race window has fully passed?
> > >
> > > A mempolicy can be either associated with a process or with a VMA.
> > > All VMA manipulation is somewhat protected by a down_read on
> > > mmap_lock. In process context there is no locking because only
> > > the process accesses its own state.
> >
> > We shouldn't really rely on mmap_sem for this IMO. There is alloc_lock
> > (aka task_lock) that makes sure the policy is stable so that the caller
> > can atomically take a reference and hold onto the policy. And we do not
> > do that consistently, and this should be fixed. E.g. just looking at
> > some random places, allowed_mems_nr (relying on get_task_policy) is
> > completely lockless and some paths (like fadvise) do not use any of the
> > explicit (alloc_lock) or implicit (mmap_lock) locking. That means that
> > the task_work based approach cannot really work in this case, right?
>
> Just to be more explicit. The task_work based approach still requires
> additional synchronization among different threads unless I am missing
> something, so this is a really fragile synchronization model.

Scratch that. I've managed to confuse myself. Multi-threading doesn't
play any role here because the mempolicy changed by the syscall is
per-task_struct, so the task_work context is indeed mutually exclusive
with any in-kernel use of the policy.

I will need to think about it some more.
-- 
Michal Hocko
SUSE Labs
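
[Editorial sketch, not part of the original mail: a minimal, untested
illustration of the reference-under-alloc_lock pattern described above.
some_policy_user() is a made-up example reader; task_lock()/task_unlock(),
mpol_get()/mpol_put() and get_task_policy() are the existing kernel
helpers being referred to.]

/*
 * Untested sketch: take tsk->mempolicy under alloc_lock (task_lock),
 * pin it with a reference and only then drop the lock, so the policy
 * cannot be freed under the reader by a concurrent set_mempolicy().
 */
#include <linux/sched/task.h>
#include <linux/mempolicy.h>
#include <linux/nodemask.h>

static unsigned long some_policy_user(struct task_struct *tsk)
{
	struct mempolicy *pol;
	unsigned long nr_nodes = 0;

	/* alloc_lock (task_lock) keeps tsk->mempolicy stable... */
	task_lock(tsk);
	pol = tsk->mempolicy;
	/*
	 * ...so a reference can be taken atomically wrt. a concurrent
	 * policy update and the lock dropped right away.
	 */
	mpol_get(pol);
	task_unlock(tsk);

	/* the reference, not the lock, now pins the policy */
	if (pol && pol->mode == MPOL_BIND)
		nr_nodes = nodes_weight(pol->nodes);

	mpol_put(pol);
	return nr_nodes;
}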