On Wed 11-03-20 12:45:40, David Rientjes wrote:
> On Wed, 11 Mar 2020, Michal Hocko wrote:
> 
> > > > > When a process is oom killed as a result of memcg limits and the
> > > > > victim is waiting to exit, nothing ends up actually yielding the
> > > > > processor back to the victim on UP systems with preemption disabled.
> > > > > Instead, the charging process simply loops in memcg reclaim and
> > > > > eventually soft lockups.
> > > > > 
> > > > > Memory cgroup out of memory: Killed process 808 (repro) total-vm:41944kB, anon-rss:35344kB, file-rss:504kB, shmem-rss:0kB, UID:0 pgtables:108kB oom_score_adj:0
> > > > > watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [repro:806]
> > > > > CPU: 0 PID: 806 Comm: repro Not tainted 5.6.0-rc5+ #136
> > > > > RIP: 0010:shrink_lruvec+0x4e9/0xa40
> > > > > ...
> > > > > Call Trace:
> > > > >  shrink_node+0x40d/0x7d0
> > > > >  do_try_to_free_pages+0x13f/0x470
> > > > >  try_to_free_mem_cgroup_pages+0x16d/0x230
> > > > >  try_charge+0x247/0xac0
> > > > >  mem_cgroup_try_charge+0x10a/0x220
> > > > >  mem_cgroup_try_charge_delay+0x1e/0x40
> > > > >  handle_mm_fault+0xdf2/0x15f0
> > > > >  do_user_addr_fault+0x21f/0x420
> > > > >  page_fault+0x2f/0x40
> > > > > 
> > > > > Make sure that something ends up actually yielding the processor
> > > > > back to the victim to allow for memory freeing. The most appropriate
> > > > > place appears to be shrink_node_memcgs(), where the iteration over
> > > > > all descendant memcgs could be particularly lengthy.
> > > > 
> > > > There is a cond_resched in shrink_lruvec and another one in
> > > > shrink_page_list. Why doesn't either of them hit? Is it because there
> > > > are no pages on the LRU list? The rss data suggests there should be
> > > > enough pages to go that path. Or maybe it is the shrink_slab path
> > > > that takes too long?
> > > 
> > > I think it can be a number of cases, most notably the
> > > mem_cgroup_protected() checks, which is why the cond_resched() is
> > > added above them. Rather than add cond_resched() only for
> > > MEMCG_PROT_MIN and for certain MEMCG_PROT_LOW cases, the
> > > cond_resched() is added above the switch clause because the iteration
> > > itself may be potentially very lengthy.
> > 
> > Was any of the above the case for your soft lockup? How did you manage
> > to trigger it? As I've said, I am not against the patch, but I would
> > really like to see an actual explanation of what happened rather than
> > speculation about what might have happened. If for nothing else, then
> > for future reference.
> 
> Yes, this is how it was triggered in my own testing.
> 
> > If this is really about the whole hierarchy being MEMCG_PROT_MIN
> > protected, and that results in a very expensive and pointless reclaim
> > walk that can trigger a soft lockup, then it should be explicitly
> > mentioned in the changelog.
> 
> I think the changelog clearly states that we need to guarantee that a
> reclaimer will yield the processor back to allow a victim to exit. This
> is where we make that guarantee. If it helps to note the specific reason
> it triggered in my testing, we could add:
> 
> "For example, mem_cgroup_protected() can prohibit reclaim and thus any
> yielding in page reclaim would not address the issue."

I would suggest something like the following:
"
The reclaim path (including the OOM path) relies on explicit scheduling
points to hand over execution to tasks which could help with the reclaim
process. Currently it is mostly shrink_page_list which yields the CPU for
each reclaimed page. This might be insufficient in some configurations,
though. E.g. when a memcg OOM path is triggered in a hierarchy which
doesn't have any reclaimable memory because of memory reclaim protection
(MEMCG_PROT_MIN), it is possible to trigger a soft lockup during an out
of memory situation on non-preemptible kernels:

<PUT YOUR SOFT LOCKUP SPLAT HERE>

Fix this by adding a cond_resched up in the reclaim path and make sure
there is a yield point regardless of the reclaimability of the target
hierarchy.
"
-- 
Michal Hocko
SUSE Labs
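
For context, the change being discussed would look roughly like the hunk
below, sketched against the 5.6-era shrink_node_memcgs() in mm/vmscan.c.
This is an illustration of the approach (a cond_resched() at the top of
the descendant walk, ahead of the mem_cgroup_protected() switch), not
necessarily the exact hunk from David's patch:

--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
 	memcg = mem_cgroup_iter(target_memcg, NULL, NULL);
 	do {
 		struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);
 		unsigned long reclaimed;
 		unsigned long scanned;
 
+		/*
+		 * This loop can become CPU-bound when the target memcgs
+		 * aren't eligible for reclaim - either because they don't
+		 * have any reclaimable pages, or because their memory is
+		 * explicitly protected (MEMCG_PROT_MIN). Always yield here
+		 * so that an OOM victim can make forward progress on
+		 * non-preemptible kernels.
+		 */
+		cond_resched();
+
 		switch (mem_cgroup_protected(target_memcg, memcg)) {
 		case MEMCG_PROT_MIN:
 			/*
 			 * Hard protection.
 			 * If there is no reclaimable memory, OOM.
 			 */
 			continue;

The point of placing the scheduling point before the switch rather than
inside the MEMCG_PROT_MIN case is that the yield then happens on every
iteration of the walk, regardless of why a given memcg turns out to be
unreclaimable.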