On Wed, Feb 19, 2020 at 11:28:48AM -0500, Kenny Ho wrote:
> On Wed, Feb 19, 2020 at 11:18 AM Johannes Weiner <hannes@xxxxxxxxxxx> wrote:
> >
> > Yes, I'd go with absolute units when it comes to memory, because it's
> > not a renewable resource like CPU and IO, and so we do have cliff
> > behavior around the edge where you transition from ok to not-enough.
> >
> > memory.low is a bit in flux right now, so if anything is unclear
> > around its semantics, please feel free to reach out.
>
> I am not familiar with the discussion, would you point me to a
> relevant thread please?

Here is a cleanup patch, not yet merged, that documents the exact
semantics and behavioral considerations:

https://lore.kernel.org/linux-mm/20191213192158.188939-3-hannes@xxxxxxxxxxx/

But the high-level idea is this: you assign each cgroup or cgroup
subtree a chunk of the resource that it's guaranteed to be able to
consume. It *can* consume beyond that threshold if memory is available,
but the overage may get reclaimed again if somebody else needs it
instead.

This allows you to do a ballpark distribution of the resource between
different workloads, while the kernel retains the ability to optimize
allocation of spare resources - because in practice, workload demand
varies over time, workloads disappear and new ones start up, etc.

> In addition, is there some kind of order of preference for
> implementing low vs high vs max?

If you implement only one allocation model, the preference would be
memory.low. Limits are rigid and by definition waste resources, so in
practice we're moving away from them.
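
To make the protection model above concrete, here is a minimal sketch of
how it could be set up through the cgroup2 interface files. The mount
point, cgroup name and the 4G value are illustrative assumptions, not
something from this thread:

    # enable the memory controller for the children of the root cgroup
    echo "+memory" > /sys/fs/cgroup/cgroup.subtree_control

    # create a cgroup for the workload and protect ~4G for it:
    # it can grow past 4G while memory is idle, but anything above
    # the protection is reclaimed first when somebody else needs it
    mkdir /sys/fs/cgroup/workload-a
    echo 4G > /sys/fs/cgroup/workload-a/memory.low

    # move the workload into the cgroup (PID is a placeholder)
    echo $WORKLOAD_PID > /sys/fs/cgroup/workload-a/cgroup.procs

Sibling cgroups that keep the default memory.low of 0 get no protection
and are reclaimed from first under pressure, which is what gives you the
ballpark distribution without rigid limits.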