On Wed, May 04, 2016 at 10:12:43AM +0200, Vlastimil Babka wrote:
> On 05/04/2016 07:45 AM, Joonsoo Kim wrote:
> > I still don't agree with some parts of this patchset that deal with
> > !costly orders. As you know, there were two regression reports from
> > Hugh and Aaron, and you fixed them by ensuring that compaction is
> > triggered. I think these show the problem with this patchset. The
> > previous kernel didn't need to ensure compaction was triggered and
> > just worked fine in any case.
>
> IIRC the previous kernel somehow subtly never OOM'd for !costly orders.

IIRC, it would not OOM in the thrashing case, but it could OOM in other
cases.

> So anything that introduces the possibility of OOM may look like a
> regression for some corner-case workloads. But I don't think that
> it's OK to not OOM for e.g. kernel stack allocations?

Sorry, the double negation is hard for me to parse since I'm not a
native speaker. So, you think that it's OK to OOM for kernel stack
allocations? I think so, too. But I don't want to OOM prematurely.

> > Your series makes compaction necessary for everything. OOM handling
> > is an essential part of MM, but compaction isn't, so OOM handling
> > should not depend on compaction. I tested my own benchmark without
> > CONFIG_COMPACTION and found that premature OOM happens.
> >
> > I hope that you try to test something without CONFIG_COMPACTION.
>
> Hmm, a valid point, !CONFIG_COMPACTION should be considered. But
> reclaim cannot guarantee forming an order>0 page. But neither does
> OOM. So would you suggest we keep reclaiming without OOM as before,
> to prevent these regressions? Or where do we draw the line here?

I suggested memorizing the number of reclaimable pages when entering the
allocation slowpath and trying to reclaim at least that amount before
giving up. Thrashing is effectively prevented by this algorithm, and we
don't trigger OOM prematurely.

Thanks.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to
majordomo@xxxxxxxxx. For more info on Linux MM, see:
http://www.linux-mm.org/ .