(sorry, I forgot to turn off HTML formatting)

Thank you, I can try this on ToT, although I think that the problem is
not with the OOM killer itself but earlier---i.e. invoking the OOM
killer seems unnecessary and wrong.  Here's the question.

The general strategy for page allocation seems to be (please correct me
as needed):

1. look in the free lists
2. if that did not succeed, try to reclaim, then try again to allocate
3. keep trying as long as progress is made (i.e. something was reclaimed)
4. if no progress was made and no pages were found, invoke the OOM killer.

I'd like to know if that "progress is made" notion is possibly buggy.
Specifically, does it mean "progress is made by this task"?  Is it
possible that resource contention creates a situation where most tasks
in most cases can reclaim and allocate, but one task randomly fails to
make progress?  (A rough sketch of the loop as I understand it is
appended at the end of this message.)

On Tue, Jun 27, 2017 at 8:21 AM, Luigi Semenzato <semenzato@xxxxxxxxxx> wrote:
> (copying Minchan because I just asked him the same question.)
>
> Thank you, I can try this on ToT, although I think that the problem is not
> with the OOM killer itself but earlier---i.e. invoking the OOM killer seems
> unnecessary and wrong.  Here's the question.
>
> The general strategy for page allocation seems to be (please correct me as
> needed):
>
> 1. look in the free lists
> 2. if that did not succeed, try to reclaim, then try again to allocate
> 3. keep trying as long as progress is made (i.e. something was reclaimed)
> 4. if no progress was made and no pages were found, invoke the OOM killer.
>
> I'd like to know if that "progress is made" notion is possibly buggy.
> Specifically, does it mean "progress is made by this task"?  Is it possible
> that resource contention creates a situation where most tasks in most cases
> can reclaim and allocate, but one task randomly fails to make progress?
>
>
> On Tue, Jun 27, 2017 at 12:11 AM, Michal Hocko <mhocko@xxxxxxxxxx> wrote:
>>
>> On Fri 23-06-17 16:29:39, Luigi Semenzato wrote:
>> > It is fairly easy to trigger OOM-kills with almost empty swap, by
>> > running several fast-allocating processes in parallel.  I can
>> > reproduce this on many 3.x kernels (I think I tried also on 4.4 but am
>> > not sure).  I am hoping this is a known problem.
>>
>> The oom detection code has been reworked considerably in 4.7 so I would
>> like to see whether your problem is still present with more up-to-date
>> kernels.  Also an OOM report is really necessary to get any clue what
>> might have been going on.
>>
>> --
>> Michal Hocko
>> SUSE Labs
>
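
P.S. To make the question concrete, here is a rough, self-contained C
sketch of the retry loop as I understand it.  This is not actual kernel
code: try_free_lists(), try_reclaim() and oom_kill() are made-up stubs,
and the real logic (as far as I can tell) lives in
__alloc_pages_slowpath() in mm/page_alloc.c.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Stub: pretend the free lists randomly have a page or not. */
static bool try_free_lists(void)
{
	return rand() % 4 == 0;
}

/* Stub: "progress" is the number of pages this task reclaimed. */
static unsigned long try_reclaim(void)
{
	return rand() % 3;
}

static void oom_kill(void)
{
	printf("OOM killer invoked\n");
}

/* The loop in question: steps 1-4 from the message above. */
static bool alloc_page_sketch(void)
{
	for (;;) {
		/* 1. look in the free lists */
		if (try_free_lists())
			return true;

		/* 2. no luck: try to reclaim, then retry the allocation */
		unsigned long progress = try_reclaim();

		/*
		 * 3. keep trying as long as progress is made.  This is the
		 * part I am asking about: if "progress" only counts what
		 * *this* task reclaimed, and other tasks grab the reclaimed
		 * pages first, one unlucky task may see no progress even
		 * though the system as a whole is reclaiming fine.
		 */
		if (progress)
			continue;

		/* 4. no progress and no free pages: declare OOM */
		oom_kill();
		return false;
	}
}

int main(void)
{
	srand(0);
	for (int i = 0; i < 10; i++)
		printf("allocation %d: %s\n", i,
		       alloc_page_sketch() ? "ok" : "failed");
	return 0;
}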