On Fri 01-12-23 12:09:55, Johannes Weiner wrote:
> On Fri, Dec 01, 2023 at 10:33:01AM +0100, Michal Hocko wrote:
> > On Thu 30-11-23 11:56:42, Johannes Weiner wrote:
> > [...]
> > > So I wouldn't say it's merely a reclaim hint. It controls a very
> > > concrete and influential factor in VM decision making. And since the
> > > global swappiness is long-established ABI, I don't expect its meaning
> > > to change significantly any time soon.
> >
> > As I've said, I am more worried about potential future changes which
> > would modify existing corner cases, or reduce or add more of them, in
> > a way that would be seen as a change of behavior from the user-space
> > POV. That means we would have to be really explicit about the fact
> > that the reclaim is free to override the swappiness provided by the
> > user. So it is essentially a best-effort interface without any actual
> > guarantees. That surely makes it harder to use. Is it still usable?
>
> But it's not free to override the setting as it pleases. I wrote a
> detailed list of the current exceptions, and why the user wouldn't
> have strong expectations of swappiness being respected in those
> cases. Having reasonable limitations is not the same as everything
> being up for grabs.

Well, I was not suggesting that future changes would intentionally
break swappiness. But look at the history: we have had times when
swappiness was ignored most of the time due to a heavy page-cache
bias. It is really hard to predict future reclaim changes, but I can
easily imagine that the IO refault cost used to balance the file
vs. anon LRUs will play a role in future reclaim improvements and
extensions.

> Again, the swappiness setting is ABI, and people would definitely
> complain if we ignored their request in an unexpected situation and
> regressed their workloads.
>
> I'm not against documenting the exceptions and limitations. Not just
> for proactive reclaim, but for swappiness in general.
> But I don't
> think it's fair to say that there are NO rules and NO userspace
> contract around this parameter (and I'm the one who wrote most of the
> balancing code that implements the swappiness control).

Right, but the behavior might change considerably between kernel
versions, and that is something to be really careful about. One thing
I would really like to avoid is providing any guarantee that
swappiness X plus nr_to_reclaim yields an exact ratio of anon to file
pages reclaimed, or accepting "this is a regression because $VER-1
behaved that way". There might be very legitimate reasons to use
different heuristics in the memory reclaim.

Another option would be to drop any heuristics when swappiness is
provided to the memory.reclaim interface. That would be much more
predictable, but it would also diverge from the normal reclaim, and
that is quite bad IMHO.

-- 
Michal Hocko
SUSE Labs
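For reference, the interface being debated is the proposed `swappiness=` argument to the cgroup v2 memory.reclaim file, where user space appends a per-request swappiness to a proactive reclaim request. The sketch below shows only the request syntax from the proposal; the `proactive_reclaim` helper and the scratch directory are hypothetical, used here so the format can be demonstrated without a live cgroup v2 hierarchy (a real request would target `/sys/fs/cgroup/<group>/memory.reclaim` and requires an appropriately delegated cgroup).

```shell
#!/bin/sh
# Hypothetical wrapper around the proposed memory.reclaim syntax:
#   echo "<amount> swappiness=<0..200>" > <group>/memory.reclaim
# swappiness=0 biases the request toward file pages, higher values
# toward anon pages -- subject to the best-effort caveats discussed
# in this thread.
proactive_reclaim() {
    # $1 = cgroup directory, $2 = amount, $3 = swappiness
    printf '%s swappiness=%s' "$2" "$3" > "$1/memory.reclaim"
}

# Demonstrate the request format against a scratch directory instead
# of a live cgroup, so no root or cgroup v2 mount is needed.
scratch=$(mktemp -d)
proactive_reclaim "$scratch" 512M 0
cat "$scratch/memory.reclaim"
```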