On Mon, Dec 04, 2023 at 04:23:47PM +0100, Michal Hocko wrote:
> On Fri 01-12-23 12:09:55, Johannes Weiner wrote:
> > On Fri, Dec 01, 2023 at 10:33:01AM +0100, Michal Hocko wrote:
> > > On Thu 30-11-23 11:56:42, Johannes Weiner wrote:
> > > [...]
> > > > So I wouldn't say it's merely a reclaim hint. It controls a very
> > > > concrete and influential factor in VM decision making. And since
> > > > the global swappiness is long-established ABI, I don't expect its
> > > > meaning to change significantly any time soon.
> > >
> > > As I've said, I am more worried about potential future changes
> > > which would modify existing corner cases, or reduce or add new
> > > ones, in ways that would be seen as a change of behavior from the
> > > user space POV. That means that we would have to be really
> > > explicit about the fact that reclaim is free to override the
> > > swappiness provided by the user. So essentially a best effort
> > > interface without any actual guarantees. That surely makes it
> > > harder to use. Is it still usable?
> >
> > But it's not free to override the setting as it pleases. I wrote a
> > detailed list of the current exceptions, and why the user wouldn't
> > have strong expectations of swappiness being respected in those
> > cases. Having reasonable limitations is not the same as everything
> > being up for grabs.
>
> Well, I was not suggesting that future changes would intentionally
> break swappiness. But look at the history: we've had times when
> swappiness was ignored most of the time due to a heavy page cache
> bias. Now, it is really hard to predict future reclaim changes, but
> I can easily imagine that using IO refault cost to balance the file
> vs. anon LRUs would be among future reclaim improvements and
> extensions.

That's a possibility, but it would be an explicit *replacement* of
the swappiness setting. Since swappiness is already exposed in more
than one way (global, cgroup1), truly overriding it would need to be
opt-in.

We could only remove the heavy page cache bias without a switch
because doing so didn't actually break user expectations. In fact it
followed them more closely: previously, with swappiness=60 the kernel
might not swap at all even with a heavily thrashing cache.

The thing Linus keeps stressing is not that we cannot change
behavior, but that we cannot break (reasonable) existing setups. So
we can make improvements as long as we don't violate general user
expectations or generate IO patterns that are completely
contradictory to the setting.

> > Again, the swappiness setting is ABI, and people would definitely
> > complain if we ignored their request in an unexpected situation and
> > regressed their workloads.
> >
> > I'm not against documenting the exceptions and limitations. Not
> > just for proactive reclaim, but for swappiness in general. But I
> > don't think it's fair to say that there are NO rules and NO
> > userspace contract around this parameter (and I'm the one who wrote
> > most of the balancing code that implements the swappiness control).
>
> Right, but the behavior might change considerably between different
> kernel versions, and that is something to be really careful about.
> One thing I would really like to avoid is providing any guarantee
> that swappiness X and nr_to_reclaim yield an exact split of
> anon/file pages reclaimed, or that anything else is a regression
> because $VER-1 behaved that way. There might be very legitimate
> reasons to use different heuristics in the memory reclaim.

Yes, it shouldn't mean any more than the global swappiness means.
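Concretely, the expectation is just that a per-request value slots in
where the existing knobs would otherwise be consulted. For reference,
the two exposures I mean (illustrative shell; standard paths, GROUP
is a placeholder):

    cat /proc/sys/vm/swappiness                        # global knob
    cat /sys/fs/cgroup/memory/GROUP/memory.swappiness  # cgroup1 per-group value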
> Another option would be to drop any heuristics when swappiness is
> provided for the memory.reclaim interface, which would be much more
> predictable, but it would also diverge from the normal reclaim and
> that is quite bad IMHO.

I would prefer to keep the semantics for global/reactive and
proactive reclaim the same. Making an existing tunable available in
cgroup2 is much lower risk than providing something new and different
under the same name.
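For completeness, usage of the interface under discussion would look
something like this (a sketch assuming the swappiness= argument
syntax from this series; the path, sizes, and GROUP are illustrative):

    # reclaim 512M proactively, biasing strongly toward file pages
    echo "512M swappiness=0" > /sys/fs/cgroup/GROUP/memory.reclaim

    # same request, biasing strongly toward anon pages
    echo "512M swappiness=200" > /sys/fs/cgroup/GROUP/memory.reclaim

    # no swappiness argument: fall back to the cgroup's existing value
    echo "512M" > /sys/fs/cgroup/GROUP/memory.reclaim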