On Tue, 2 May 2017, Michal Hocko wrote:

> I have already asked and my questions were ignored. So let me ask again
> and hopefully not get ignored this time. So why do we need a different
> criterion on anon pages than file pages?

The preference in get_scan_count() as already implemented is to reclaim
from file pages if there is enough memory on the inactive list to
reclaim. That is unchanged with this patch.

> I do agree that blindly scanning anon pages when file pages are low is
> very suboptimal, but this adds yet another heuristic without _any_
> numbers. Why cannot we simply treat anon and file pages equally?
> Something like the following
>
> 	if (pgdatfile + pgdatanon + pgdatfree > 2 * total_high_wmark) {
> 		scan_balance = SCAN_FILE;
> 		if (pgdatfile < pgdatanon)
> 			scan_balance = SCAN_ANON;
> 		goto out;
> 	}

This would be substantially worse than the current code because it
thrashes the anon lru whenever anon outnumbers file pages, rather than
only at the point where we fall under the high watermarks for all
eligible zones. If you tested your suggestion, you would see gigabytes
of memory left untouched on the file lru, even though anonymous memory
is more likely to be part of the working set.

> Also it would help to describe the workload which can trigger this
> behavior so that we can compare numbers before and after this patch.

Any workload that fills system RAM with anonymous memory that cannot be
reclaimed will thrash the anon lru without this patch.