On Tue, Aug 18, 2020 at 12:18:44PM +0200, peterz@xxxxxxxxxxxxx wrote:
> What you need is a feedback loop against the rate of freeing pages, and
> when you near the saturation point, the allocation rate should exactly
> match the freeing rate.

IO throttling solves a slightly different problem.

IO occurs in parallel to the workload's execution stream, and you're
trying to take the workload from dirtying at CPU speed to rate-matching
the independent IO stream.

With memory allocations, though, freeing happens from inside the
execution stream of the workload. If you throttle allocations, you're
most likely throttling the freeing rate as well. And you'll slow down
reclaim scanning by the same amount as the page references, so
throttling isn't making reclaim any more successful either. The
alloc/use/free (im)balance is an inherent property of the workload,
regardless of the speed at which you execute it.

So the goal here is different. We're not trying to pace the workload
into some form of sustainability. Rather, it's for OOM handling. When
we detect that the workload's alloc/use/free pattern is unsustainable
given the available memory, we slow it down just enough to allow
userspace to implement OOM policy and job priorities (on containerized
hosts these tend to be too complex to express in the kernel's oom
scoring system).

The exponential curve makes it look like we're trying to do some type
of feedback system, but it's really only there to let minor infractions
pass and to throttle unsustainable expansion ruthlessly. Drop-behind
reclaim can be a bit bumpy because we batch on the allocation side as
well as on the reclaim side, hence the fuzz factor there.
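For illustration, here is a rough userspace sketch of that curve shape.
The function name, the 10%-overage step size, and the cap are all made
up for this example and are not the actual kernel code; the point is
just the behavior described above: near-zero delay for small overage,
and a delay that ramps up hard once the expansion is clearly
unsustainable.

	/*
	 * Illustrative sketch only -- names and constants are
	 * hypothetical. It shows the shape being described: a penalty
	 * curve that stays near zero for small overage and ramps up
	 * exponentially as usage exceeds the limit.
	 */
	#include <stdio.h>

	#define PENALTY_MAX_MSECS	2000	/* hypothetical cap */

	/*
	 * usage and limit in pages; returns a throttle delay in
	 * milliseconds. Each additional 10% of overage doubles the
	 * penalty, so a workload a few pages over sleeps for
	 * effectively nothing, while one that keeps expanding hits
	 * the cap almost immediately.
	 */
	static unsigned long penalty_msecs(unsigned long usage,
					   unsigned long limit)
	{
		unsigned long overage, steps, penalty;

		if (usage <= limit)
			return 0;

		overage = usage - limit;
		steps = (overage * 10) / limit;	/* 10% increments */
		if (steps >= 12)		/* would exceed the cap */
			return PENALTY_MAX_MSECS;

		penalty = 1UL << steps;		/* 1, 2, 4, 8, ... ms */
		return penalty < PENALTY_MAX_MSECS ?
			penalty : PENALTY_MAX_MSECS;
	}

	int main(void)
	{
		unsigned long limit = 1000;

		/* minor infraction: 1% over -> negligible delay */
		printf("1%%ize over:   %lu ms\n",
		       penalty_msecs(1010, limit));
		/* unsustainable expansion: 2x the limit */
		printf("100%% over: %lu ms\n",
		       penalty_msecs(2000, limit));
		return 0;
	}

With a curve like this the throttle is invisible during normal
operation, and only becomes the dominant cost once the workload is
already past what the available memory can sustain, which is exactly
the window in which userspace OOM handling needs to run.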