On Sat 24-10-15 03:21:09, Tejun Heo wrote:
> Hello,
>
> On Fri, Oct 23, 2015 at 01:11:45PM +0200, Michal Hocko wrote:
> > > The problem here is not lack
> > > of execution resource but concurrency management misunderstanding the
> > > situation.
> >
> > And this sounds like a bug to me.
>
> I don't know. It can be argued either way, the other direction being that a
> kernel thread going RUNNING non-stop is buggy. Given how this has
> been a complete non-issue for all the years, I'm not sure how useful
> plugging this is.

Well, I guess we haven't noticed because this is a pathological case. It
also triggers OOM livelocks which were not reported in the past either.
You do not reach this state normally unless you really _want_ to kill
your machine. And vmstat is not the only instance. E.g. the sysrq OOM
trigger is known to get stuck behind other work items in similar cases.
It should be changed to use a dedicated WQ_MEM_RECLAIM wq (a rough
sketch of that idea is appended below) and it would require the
runnable item guarantee as well.

> > Don't we have some IO related paths which would suffer from the same
> > problem? I haven't checked all the WQ_MEM_RECLAIM users but from the
> > name I would expect they _do_ participate in the reclaim and so they
> > should be able to make progress. Now if your new IMMEDIATE flag will
>
> Seriously, nobody goes full-on RUNNING.

Looping with cond_resched() seems like a general pattern in the kernel
when there is no clear event to wait for (a minimal illustration of the
pattern is appended below). We have io_schedule() for when we know we
should wait for IO (e.g. on congestion), but that is not necessarily
the case, as you can see here. What should we wait for? A short nap
without actually waiting on anything sounds like a dirty workaround
to me.

> > guarantee that then I would argue that it should be implicit for
> > WQ_MEM_RECLAIM otherwise we always risk a similar situation. What would
> > be a counter argument for doing that?
>
> Not serving any actual purpose and degrading execution behavior.

I dunno, I am not familiar enough with the WQ internals to see the
risks, but to me it sounds like WQ_MEM_RECLAIM gives an incorrect
impression of safety wrt. memory pressure, and as demonstrated it
doesn't actually provide that. Even if you consider the cond_resched()
behavior of the page allocator a bug, we should be able to handle it
gracefully.
--
Michal Hocko
SUSE Labs
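
The dedicated-workqueue idea for the sysrq OOM trigger, as a minimal
sketch. This is an illustration, not a tested patch: the "moom" wq
name and the init hook are assumptions of mine; a work item with a
WQ_MEM_RECLAIM queue gets a rescuer thread, so it cannot be starved by
unrelated items on system_wq.

	#include <linux/init.h>
	#include <linux/workqueue.h>

	/*
	 * Sketch only: give the sysrq OOM trigger its own
	 * WQ_MEM_RECLAIM workqueue so the work item is backed by a
	 * rescuer thread instead of competing with everything else
	 * on system_wq.
	 */
	static struct workqueue_struct *moom_wq;

	static void moom_callback(struct work_struct *ignored)
	{
		/* ... invoke the OOM killer as sysrq-f does today ... */
	}

	static DECLARE_WORK(moom_work, moom_callback);

	static int __init moom_init(void)
	{
		moom_wq = alloc_workqueue("moom", WQ_MEM_RECLAIM, 0);
		return moom_wq ? 0 : -ENOMEM;
	}

	/* the sysrq handler would then do:
	 *	queue_work(moom_wq, &moom_work);
	 * instead of schedule_work(&moom_work).
	 */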
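
And the looping pattern I mean, reduced to its core
(try_to_make_progress() is a made-up placeholder): the task never
blocks, cond_resched() only yields the CPU, so the worker stays
RUNNABLE the whole time and the pool's concurrency management never
considers it idle or wakes up another worker.

	#include <linux/sched.h>
	#include <linux/types.h>
	#include <linux/workqueue.h>

	/* hypothetical retry condition, stands in for e.g. the page
	 * allocator failing to make forward progress */
	static bool try_to_make_progress(void);

	static void busy_loop_work_fn(struct work_struct *work)
	{
		/*
		 * No event to wait on, so just retry and yield.
		 * cond_resched() keeps the task RUNNABLE, which is
		 * exactly why a busy pool never starts another
		 * worker on its behalf.
		 */
		while (!try_to_make_progress())
			cond_resched();
	}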