On Mon, 7 Mar 2022, Johannes Weiner wrote:

> > IOW, this would be a /sys/devices/system/node/nodeN/reclaim mechanism
> > for each NUMA node N on the system.  (It would be similar to the
> > existing per-node sysfs "compact" mechanism used to trigger compaction
> > from userspace.)
>
> I generally think a proactive reclaim interface is a good idea.
>
> A per-cgroup control knob would make more sense to me, as cgroupfs
> takes care of delegation, namespacing etc. and so would permit
> self-directed proactive reclaim inside containers.
>

This is an interesting point and something that would need to be decided.
There are pros and cons to both approaches: a per-cgroup mechanism vs a
purely per-node sysfs mechanism that can take a cgroup id.

The reason we'd like this in sysfs is because of users who do not enable
CONFIG_MEMCG but would still benefit from proactive reclaim.  Such users
do exist and do not rely on memcg, such as Chrome OS, and from my
understanding this is normally done to speed up hibernation.

But I note your use of "per-cgroup" control knob and not specifically
"per-memcg".  Were you considering a proactive reclaim mechanism for a
cgroup other than memcg?  A new one?

I'm wondering if it would make sense for such a cgroup interface, if
eventually needed, to be added incrementally on top of a per-node sysfs
interface.  (We know today that there is a need for proactive reclaim for
users who do not use memcg at all.)

> > Userspace would write the following to this file:
> >  - nr_to_reclaim pages
>
> This makes sense, although (and you hinted at this below), I'm
> thinking it should be in bytes, especially if part of cgroupfs.
>

If we agree upon a sysfs interface, I assume there would be no objection
to expressing this as nr_to_reclaim pages?  I agree that if this is to be
a memcg knob, it should be expressed in bytes for consistency with the
other knobs.

> >  - swappiness factor
>
> This I'm not sure about.
>
> Mostly because I'm not sure about swappiness in general. It balances
> between anon and file, but both of them are aged according to the same
> LRU rules. The only reason to prefer one over the other seems to be
> when the cost of reloading one (refault vs swapin) isn't the same as
> the other. That's usually a hardware property, which in a perfect
> world we'd auto-tune inside the kernel based on observed IO
> performance. Not sure why you'd want this per reclaim request.
>
> >  - flags to specify context, if any[**]
> >
> > [**] this is offered for extensibility to specify the context in which
> > reclaim is being done (clean file pages only, demotion for memory
> > tiering vs eviction, etc), otherwise 0
>
> This one is curious. I don't understand the use cases for either of
> these examples, and I can't think of other flags a user may pass on a
> per-invocation basis. Would you care to elaborate some?
>

If we combine the above two concerns, maybe only a flags argument is
sufficient, where you can specify only anon or only file (and neither
means both)?  What is controllable by swappiness could then be controlled
by two different writes to the interface, one for (possibly) anon and one
for (possibly) file; a sketch follows below.

There was discussion about treating the two types of memory differently as
a function of reload cost, the cost of doing I/O for discard, and how much
swap space we want proactive reclaim to consume; the only alternative
today is to play with the global vm.swappiness.
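To make the two-write idea above concrete, here is a minimal sketch,
assuming a "nr_to_reclaim cgroup_id flags" write format and "anon"/"file"
flag values (the format and all names here are hypothetical, not an
existing ABI):

	# reclaim up to 512 file-backed pages from node 0, scoped to
	# cgroup id 42
	echo "512 42 file" > /sys/devices/system/node/node0/reclaim

	# a second write for anonymous memory, replacing a single
	# swappiness-weighted request
	echo "512 42 anon" > /sys/devices/system/node/node0/reclaim

	# no flags: both anon and file pages are eligible
	echo "1024 42 0" > /sys/devices/system/node/node0/reclaim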
Michal asked if this would include slab reclaim or shrinkers.  I think the
answer is "possibly yes," but there is no initial use case for it (flags
would be extensible enough to permit adding it incrementally).  In fact,
if you were to pass a cgroup id of 0 to induce global proactive reclaim,
you could mimic the control we have with vm.drop_caches today, although it
would not reclaim all of a memory type at once.
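For comparison, a sketch of that mimicry, again using the hypothetical
write format from above:

	# today: drop all clean page cache plus reclaimable slab, globally
	echo 3 > /proc/sys/vm/drop_caches

	# hypothetical: reclaim a bounded number of file pages from node 0
	# with a cgroup id of 0 (global), rather than an entire memory type
	echo "4096 0 file" > /sys/devices/system/node/node0/reclaim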