On Fri, Oct 29, 2021 at 6:03 AM Michal Hocko <mhocko@xxxxxxxx> wrote:
>
> On Wed 27-10-21 09:08:21, Suren Baghdasaryan wrote:
> > On Fri, Oct 22, 2021 at 10:38 AM Suren Baghdasaryan <surenb@xxxxxxxxxx> wrote:
> > >
> > > On Fri, Oct 22, 2021 at 1:03 AM Michal Hocko <mhocko@xxxxxxxx> wrote:
> > > >
> > > > On Thu 21-10-21 18:46:58, Suren Baghdasaryan wrote:
> > > > > A race between process_mrelease and exit_mmap, where free_pgtables
> > > > > is called while __oom_reap_task_mm is in progress, leads to a
> > > > > kernel crash during the pte_offset_map_lock call. The oom-reaper
> > > > > avoids this race by setting the MMF_OOM_VICTIM flag and causing
> > > > > exit_mmap to take and release mmap_write_lock, blocking it until
> > > > > the oom-reaper releases mmap_read_lock. Reusing MMF_OOM_VICTIM for
> > > > > process_mrelease would be the simplest way to fix this race,
> > > > > however that would be considered a hack. Fix this race by elevating
> > > > > mm->mm_users and preventing exit_mmap from executing until
> > > > > process_mrelease is finished. The patch slightly refactors the code
> > > > > to adapt to a possible mmget_not_zero failure.
> > > > > This fix has a considerable negative impact on process_mrelease
> > > > > performance and will likely need later optimization.
> > > >
> > > > I am not sure there is any promise that process_mrelease will run in
> > > > parallel with the exiting process. In fact the primary purpose of this
> > > > syscall is to provide a reliable way to oom kill from user space. If
> > > > you want to optimize process exit, resp. its exit_mmap part, then you
> > > > should be using other means. So I would be careful calling this a
> > > > regression.
> > > >
> > > > I do agree that taking the reference count is the right approach here.
> > > > I was wrong previously [1] when saying that pinning the mm struct is
> > > > sufficient. I had completely forgotten about the subtle sync in
> > > > exit_mmap. One way we can approach that would be to take the exclusive
> > > > mmap_sem throughout exit_mmap unconditionally.
> > >
> > > I agree, that would probably be the cleanest way.
> > >
> > > > There was a push back against
> > > > that though so the arguments would have to be re-evaluated.
> > >
> > > I'll review that discussion to better understand the reasons for the
> > > push back. Thanks for the link.
> >
> > Adding Kirill and Andrea.
> >
> > I had some time to dig some more. The latency increase is definitely
> > coming from process_mrelease calling the last mmput, and exit_aio is
> > especially problematic. So, currently process_mrelease not only
> > releases memory but does more, including waiting for I/O to finish.
>
> Well, I still do not see why that is a problem. This syscall is meant to
> release the address space, not to do it fast.

It's the same problem for a userspace memory reaper as for the
oom-reaper. The goal is to release the memory of the victim and to
quickly move on to the next one if needed.

> > Unconditional mmap_write_lock around free_pgtables in exit_mmap seems
> > to me the most semantically correct way forward, and the pushback is
> > on the basis of regressing performance of the exit path. I would like
> > to measure that regression to confirm this. I don't have access to a
> > big machine but will ask someone in another Google team to try the
> > test Michal wrote here
> > https://lore.kernel.org/all/20170725142626.GJ26723@xxxxxxxxxxxxxx/ on
> > a server with and without a custom patch.
>
> Well, I do not remember all the details of the discussion but I believe
> a rather large part of that discussion was a bit misled. The exit
> path - and the last mmput in particular - shouldn't trigger mmap_sem
> contention. There are only rare cases where somebody can race and take
> the lock then (e.g. proc interfaces taking the lock before
> mmget_not_zero). Certainly not something to optimize for, and I believe
> correct and robust code should take preference. As we can see, the lack
> of proper synchronization has led to 2 very similar problems that
> nobody revealed during review because the code is just too tricky.

I totally agree that this locking is tricky and mmap_sem contention
should be very rare in the exit_mmap path and not worth optimizing.

> Btw. the above code will not really tell you much on a larger machine
> unless you manage to trigger mmap_sem contention. Otherwise you are
> measuring the mmap_sem write-lock fast path and that should really be
> within the noise compared to the whole address space destruction time.
> If that is not the case then we have a real problem with the locking...

My understanding of that discussion is that the concern was that even
taking the uncontended mmap_sem write lock would regress the exit path.
That was what I wanted to confirm. Am I misreading it?

Thanks,
Suren.

> --
> Michal Hocko
> SUSE Labs
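
P.S. For reference, the two pieces being discussed above - pinning the
mm in process_mrelease and taking the mmap lock for write
unconditionally in exit_mmap - roughly correspond to the sketch below.
This is only an illustration of the idea, not the actual patch under
review; the helper names (mrelease_reap, exit_mmap_sketch) and the
error codes are made up for the example, while mmget_not_zero/mmput,
the mmap lock helpers and __oom_reap_task_mm are the existing kernel
interfaces.

/*
 * Illustration only (not the actual patch): process_mrelease() pins the
 * mm so the final __mmput()/exit_mmap() cannot run underneath it, and
 * exit_mmap() takes the mmap lock for write around the teardown so a
 * reaper holding the lock for read is excluded without relying on the
 * MMF_OOM_VICTIM trick.
 */

/* process_mrelease() side (simplified) */
static int mrelease_reap(struct mm_struct *mm)
{
	int ret = 0;

	/* Elevate mm->mm_users; fails if the final mmput already ran. */
	if (!mmget_not_zero(mm))
		return -ESRCH;

	mmap_read_lock(mm);
	/* Unmap the private/anonymous memory of the victim. */
	if (!__oom_reap_task_mm(mm))
		ret = -EAGAIN;
	mmap_read_unlock(mm);

	/*
	 * Dropping the last reference here is what makes the syscall pay
	 * for the full teardown (exit_aio() etc.) - the latency increase
	 * mentioned above.
	 */
	mmput(mm);
	return ret;
}

/* exit_mmap() side (simplified): write-lock the whole teardown. */
void exit_mmap_sketch(struct mm_struct *mm)
{
	mmap_write_lock(mm);	/* excludes a racing reaper unconditionally */
	/* unmap_vmas(), free_pgtables(), remove_vma() ... */
	mmap_write_unlock(mm);
}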