On Fri 14-03-25 10:18:33, Johannes Weiner wrote:
> On Fri, Mar 14, 2025 at 10:27:57AM +0100, Michal Hocko wrote:
[...]
> > I have just noticed that you have followed up [1] with a concern that
> > using swappiness in the whole min-max range without any heuristics turns
> > out to be harder than just relying on the min and max as extremes.
> > What seems to be still missing (or maybe it is just me not seeing that)
> > is why should we only enforce those extreme ends of the range and still
> > preserve under-defined semantics for all other swappiness values in the
> > pro-active reclaim.
>
> I guess I'm not seeing the "under-defined" part.

What I meant here is that any value other than the two ends of the
swappiness range doesn't have generally predictable behavior unless you
know the specific details of the current memory reclaim heuristics in
get_scan_count.

> cache_trim_mode is
> there to make sure a streaming file access pattern doesn't cause
> swapping.

Yes, I am aware of the purpose.

> He has a special usecase to override cache_trim_mode when he
> knows a large amount of anon is going cold. There is no way we can
> generally remove it from proactive reclaim.

I believe I do understand the requirement here. The patch offers a
counterpart to the noswap pro-active reclaim mode and I do not have
objections to that.

The reason I brought this up is that everything in between 0 and 200 is
kind of a gray area. We've had several queries asking why swappiness=N
doesn't work as expected, and the usual answer was the heuristics. Most
people just learned to live with that and stopped fine-tuning
vm_swappiness. Which is good, I guess.

Pro-active reclaim is slightly different in the sense that it gives
much better control over how much to reclaim and, since we have added
the swappiness extension, also over the balancing. So why not make that
balancing work for real and always follow the given proportion? To
prevent any unintended regressions, this would be the case only when
swappiness is explicitly given to the reclaim request.

Does that make any sense?
-- 
Michal Hocko
SUSE Labs
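
To make the "heuristics" point above concrete, here is a rough
standalone model of the scan-balance decision made in get_scan_count().
This is a simplified sketch, not the actual mm/vmscan.c code, and the
names are only borrowed from it: a mid-range swappiness only takes
effect when the code falls through to the proportional case, while
several earlier checks can force file-only scanning regardless of the
requested value.

#include <stdbool.h>
#include <stdio.h>

/*
 * Toy model of the scan-balance decision in get_scan_count() --
 * simplified and possibly out of date, not the actual kernel code.
 */
enum scan_balance { SCAN_EQUAL, SCAN_FRACT, SCAN_FILE, SCAN_ANON };

struct scan_control {
	bool may_swap;		/* is swapping allowed at all? */
	bool cache_trim_mode;	/* plenty of inactive file cache detected */
	int priority;		/* 0 == most desperate reclaim pass */
};

static enum scan_balance scan_balance_for(const struct scan_control *sc,
					  int swappiness)
{
	if (!sc->may_swap || !swappiness)
		return SCAN_FILE;	/* anon is not scanned at all */
	if (!sc->priority)
		return SCAN_EQUAL;	/* near OOM: ignore swappiness */
	if (sc->cache_trim_mode)
		return SCAN_FILE;	/* heuristic wins over swappiness=N */
	return SCAN_FRACT;		/* swappiness-weighted proportion */
}

int main(void)
{
	struct scan_control sc = {
		.may_swap = true,
		.cache_trim_mode = true,
		.priority = 12,
	};

	/* swappiness=120 was requested, but cache_trim_mode overrides it */
	printf("scan balance: %d (SCAN_FILE=%d)\n",
	       scan_balance_for(&sc, 120), SCAN_FILE);
	return 0;
}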
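
And for the "always follow the given proportion" idea, a minimal sketch
of what honoring the requested swappiness directly would mean. The real
SCAN_FRACT path additionally scales by recent reclaim cost per LRU; the
200 divisor simply mirrors the current swappiness maximum, so this is
only an illustration of the requested ratio, not the kernel's formula.

#include <stdio.h>

/*
 * Sketch: with swappiness S, split the scan target between anon and
 * file with weights S : (200 - S), with no heuristic overrides.
 */
static void split_scan_target(unsigned long target, int swappiness,
			      unsigned long *anon, unsigned long *file)
{
	*anon = target * swappiness / 200;
	*file = target - *anon;
}

int main(void)
{
	unsigned long anon, file;

	split_scan_target(1000, 60, &anon, &file);
	printf("swappiness=60:  anon %lu, file %lu\n", anon, file); /* 300/700 */

	split_scan_target(1000, 200, &anon, &file);
	printf("swappiness=200: anon %lu, file %lu\n", anon, file); /* 1000/0 */
	return 0;
}

With something like this, a request along the lines of
echo "1G swappiness=60" > memory.reclaim (the documented nested-key
syntax; the cgroup path is whatever the workload uses) would split the
reclaim effort in the requested proportion instead of depending on
cache_trim_mode and friends.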