At 2023-05-17 14:59:06, "Yosry Ahmed" <yosryahmed@xxxxxxxxxx> wrote:
>+David Rientjes
>
>On Tue, May 16, 2023 at 8:20 PM chengkaitao <chengkaitao@xxxxxxxxxxxxxx> wrote:
>>
>> Establish a new OOM score algorithm, supporting the cgroup-level OOM
>> protection mechanism. When a global/memcg OOM event occurs, we treat
>> all processes in the cgroup as a whole, and the OOM killer needs to
>> select the process to kill based on the protection quota of the cgroup.
>>
>
>Perhaps this is only slightly relevant, but at Google we do have a
>different per-memcg approach to protect from OOM kills, or more
>specifically to tell the kernel how we would like the OOM killer to
>behave.
>
>We define an interface called memory.oom_score_badness, and we also
>allow it to be specified per-process through a procfs interface,
>similar to oom_score_adj.
>
>These scores essentially tell the OOM killer the order in which we
>prefer memcgs to be OOM'd, and the order in which we want processes in
>the memcg to be OOM'd. By default, all processes and memcgs start with
>the same score. Ties are broken based on the rss of the process or the
>usage of the memcg (prefer to kill the process/memcg that will free
>more memory) -- similar to the current OOM killer.

Thank you for describing a new application scenario. You have outlined a
different per-memcg approach, but such a brief introduction does not make
its details clear. If you could analyze my patches for possible defects,
or point out any advantages your approach has that my patches do not, I
would greatly appreciate it.

>This has been brought up before in other discussions without much
>interest [1], but just thought it may be relevant here.
>
>[1] https://lore.kernel.org/lkml/CAHS8izN3ej1mqUpnNQ8c-1Bx5EeO7q5NOkh0qrY_4PLqc8rkHA@xxxxxxxxxxxxxx/#t

Thanks for your comment!
--
chengkaitao