Hello, Michal.

On Tue, Oct 27, 2015 at 10:16:03AM +0100, Michal Hocko wrote:
> > Seriously, nobody goes full-on RUNNING.
>
> Looping with cond_resched seems like a general pattern in the kernel
> when there is no clear source to wait for. We have io_schedule when we
> know we should wait for IO (in case of congestion) but this is not
> necessarily the case - as you can see here. What should we wait for?
> A short nap without actually waiting on anything sounds like a dirty
> workaround to me.

It's one thing to do cond_resched() in long loops to avoid long
priority inversions, and another to loop indefinitely without making
any progress.

> > > guarantee that then I would argue that it should be implicit for
> > > WQ_MEM_RECLAIM otherwise we always risk a similar situation. What
> > > would be a counter argument for doing that?
> >
> > Not serving any actual purpose and degrading execution behavior.
>
> I dunno, I am not familiar with WQ internals to see the risks but to
> me it sounds like WQ_MEM_RECLAIM gives an incorrect impression of
> safety wrt. memory pressure and as demonstrated it doesn't do that.
> Even if you

It generally does. This is an extremely rare corner case where an
infinite loop w/o forward progress is introduced w/o the user being
outright buggy.

> consider the cond_resched behavior of the page allocator a bug, we
> should be able to handle this gracefully.

We can argue this back and forth forever, but we'll either need to
special-case it (be it a short sleep or a special flag) or implement
rather complex detection logic whose practical usefulness is dubious.
It's a trade-off, and given the circumstances adding a short sleep
looks like a reasonable one to me. If this turns out to be more common,
we definitely wanna go for automatic detection.

Thanks.

-- 
tejun
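
To make the trade-off concrete, here is a minimal sketch of the two
loop shapes being debated. try_to_allocate() is a hypothetical
placeholder for the allocator's retry path, not a real kernel function;
cond_resched() and schedule_timeout_uninterruptible() are the real
kernel primitives:

	#include <linux/sched.h>
	#include <linux/types.h>

	/* Hypothetical stand-in for the allocation retry path. */
	static bool try_to_allocate(void);

	/*
	 * Pattern under discussion: the task stays RUNNING the whole
	 * time. cond_resched() gives up the CPU only when the scheduler
	 * wants it back, so nothing here guarantees forward progress.
	 */
	static void busy_retry(void)
	{
		while (!try_to_allocate())
			cond_resched();
	}

	/*
	 * Proposed workaround: a short nap actually blocks, giving the
	 * rest of the system (including the WQ_MEM_RECLAIM rescuer) a
	 * window in which to run and make progress.
	 */
	static void napping_retry(void)
	{
		while (!try_to_allocate())
			schedule_timeout_uninterruptible(1);	/* ~1 jiffy */
	}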