On Sun, Jun 23, 2024 at 04:52:00PM -0400, Waiman Long wrote:
> Correct some email addresses.
>
> On 6/23/24 16:45, Waiman Long wrote:
> > With memory cgroup v1, there is only a single "memory.limit_in_bytes"
> > knob for specifying the maximum amount of memory that is allowed to
> > be used. So many tools and applications that use memory cgroups allow
> > users to specify a single memory limit. When they migrate to cgroup
> > v2, they use the given memory limit to set memory.max and disregard
> > memory.high for the time being.
> >
> > Without properly setting memory.high, these userspace applications
> > cannot make use of memory cgroup v2's ability to further reduce the
> > chance of OOM kills by throttling and early memory reclaim.
> >
> > This patch adds a new sysctl parameter "vm/memory_high_autoset_ratio"
> > to enable setting "memory.high" automatically whenever "memory.max"
> > is set, as long as "memory.high" hasn't been explicitly set before.
> > This will allow a system administrator or a middleware layer to
> > greatly reduce the chance of memory cgroup OOM kills without worrying
> > about how to properly set memory.high.
> >
> > The new sysctl parameter accepts a range of 0-100. The default value
> > of 0 disables memory.high auto-setting. For any non-zero value "n",
> > the actual ratio used will be "n/(n+1)", so a user cannot set a
> > fraction less than 1/2.

Hi Waiman,

I'm not sure that setting memory.high is always a good idea (it comes
with a certain cost, e.g. it can increase latency), but even if it is,
why can't systemd or similar userspace tools do this?

I wonder what's special about your case, if you do see a lot of OOM
kills that can be avoided by setting memory.high. Do you have a bursty
workload?
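To illustrate the point about userspace: applying the same n/(n+1)
mapping outside the kernel is only a few lines of code. Here is a rough
sketch; the cgroup path and the ratio below are made-up example values,
not anything from the patch:

/* Sketch: derive memory.high from memory.max as max * n / (n + 1),
 * mirroring the proposed auto-set behavior, but from userspace.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	const char *cg = "/sys/fs/cgroup/example"; /* hypothetical cgroup */
	unsigned long long n = 19;	/* ratio 19/20 = 95% of memory.max */
	unsigned long long max, high;
	char path[256], buf[64];
	FILE *f;

	snprintf(path, sizeof(path), "%s/memory.max", cg);
	f = fopen(path, "r");
	if (!f || !fgets(buf, sizeof(buf), f)) {
		perror("memory.max");
		return 1;
	}
	fclose(f);

	if (!strncmp(buf, "max", 3))	/* no limit set, nothing to do */
		return 0;
	max = strtoull(buf, NULL, 10);

	/* max - max / (n + 1) equals max * n / (n + 1) (modulo rounding)
	 * without the risk of overflowing the multiplication. */
	high = max - max / (n + 1);

	snprintf(path, sizeof(path), "%s/memory.high", cg);
	f = fopen(path, "w");
	if (!f || fprintf(f, "%llu\n", high) < 0) {
		perror("memory.high");
		return 1;
	}
	fclose(f);
	return 0;
}

Thanks!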