On Thu 19-05-22 14:33:03, Suren Baghdasaryan wrote:
> On Thu, May 19, 2022 at 1:22 PM Liam Howlett <liam.howlett@xxxxxxxxxx> wrote:
[...]
> > arch_exit_mmap() was called under the write lock before, is it safe to
> > call it under the read lock?
> 
> Ah, good catch. I missed at least one call chain which I believe would
> require arch_exit_mmap() to be called under write lock:
> 
> arch_exit_mmap
>   ldt_arch_exit_mmap
>     free_ldt_pgtables
>       free_pgd_range

Why would this be a problem? This is the LDT mapped into page tables, but
as far as I know the oom_reaper cannot ever see that range because it is
not reachable from any VMA.

> I'll need to check whether arch_exit_mmap() has to be called before
> unmap_vmas(). If not, we could move it further down when we hold the
> write lock.
> Andrew, please remove this patchset from your tree for now until I fix this.
> > >
> > >         vma = mm->mmap;
> > >         if (!vma) {
> > >                 /* Can happen if dup_mmap() received an OOM */
> > > -               mmap_write_unlock(mm);
> > > +               mmap_read_unlock(mm);
> > >                 return;
> > >         }
> > >
> > > @@ -3138,6 +3121,16 @@ void exit_mmap(struct mm_struct *mm)
> > >         /* update_hiwater_rss(mm) here? but nobody should be looking */
> > >         /* Use -1 here to ensure all VMAs in the mm are unmapped */
> > >         unmap_vmas(&tlb, vma, 0, -1);
> > > +       mmap_read_unlock(mm);
> > > +
> > > +       /*
> > > +        * Set MMF_OOM_SKIP to hide this task from the oom killer/reaper
> > > +        * because the memory has been already freed. Do not bother checking
> > > +        * mm_is_oom_victim because setting a bit unconditionally is cheaper.
> > > +        */
> > > +       set_bit(MMF_OOM_SKIP, &mm->flags);
> > > +
> > > +       mmap_write_lock(mm);
> > 
> > Is there a race here?  We had a VMA but after the read lock was dropped,
> > could the oom killer cause the VMA to be invalidated?

Nope, the oom killer itself doesn't do much beyond sending SIGKILL and
scheduling the victim for the oom_reaper. dup_mmap holds the exclusive
mmap_lock throughout the whole process.

> > I don't think so
> > but the comment above about dup_mmap() receiving an OOM makes me
> > question it.  The code before kept the write lock from when the VMA was
> > found until the end of the mm edits - and it had the check for !vma
> > within the block itself.  We are also hiding it from the oom killer
> > outside the read lock so it is possible for oom to find it in that
> > window, right?

The oom killer's victim selection doesn't really depend on the mmap_lock.
If there is a race and MMF_OOM_SKIP is not set yet, then it will consider
the task but very likely bail out anyway, because the address space has
already been unmapped, so oom_badness() would consider this task boring.

The oom_reaper, on the other hand, would just try to unmap in parallel,
but that is fine regardless of MMF_OOM_SKIP. Seeing the flag would allow
it to bail out early rather than trying to unmap something that is no
longer there.

The only problem for the oom_reaper is seeing the page tables of the
address space disappear from under its feet. That is excluded by the
exclusive lock and, as Suren mentions, by the mm->mmap == NULL check if
exit_mmap wins the race.
-- 
Michal Hocko
SUSE Labs
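
[Editor's note: for readers following the locking argument, below is a rough,
hypothetical sketch of the exit_mmap() ordering the quoted patch is aiming
for, with arch_exit_mmap() speculatively moved under the later write lock as
floated in the thread. Whether that move is safe is precisely the open
question above; the vma freeing loop and accounting are omitted, so treat
this as an illustration of the sequence, not the actual patch.]

/*
 * Illustrative sketch only: unmap under the read lock, publish
 * MMF_OOM_SKIP, then take the write lock for page table teardown.
 * The arch_exit_mmap() placement is the unverified, hypothetical part.
 */
void exit_mmap(struct mm_struct *mm)
{
	struct mmu_gather tlb;
	struct vm_area_struct *vma;

	mmu_notifier_release(mm);

	mmap_read_lock(mm);
	vma = mm->mmap;
	if (!vma) {
		/* Can happen if dup_mmap() received an OOM */
		mmap_read_unlock(mm);
		return;
	}

	flush_cache_mm(mm);
	tlb_gather_mmu_fullmm(&tlb, mm);
	/* oom_reaper may unmap concurrently; that is fine under the read lock */
	unmap_vmas(&tlb, vma, 0, -1);
	mmap_read_unlock(mm);

	/* Hide this mm from the oom killer/reaper; the memory is already freed */
	set_bit(MMF_OOM_SKIP, &mm->flags);

	/* Page table teardown must exclude the oom_reaper */
	mmap_write_lock(mm);
	free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
	tlb_finish_mmu(&tlb);
	arch_exit_mmap(mm);	/* hypothetical placement, see note above */

	/* ... detach and free the vma list here ... */
	mm->mmap = NULL;	/* lets a racing oom_reaper bail out */
	mmap_write_unlock(mm);
}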