[hmm the email got stuck in my send queue - sending again]

On Mon 19-08-19 16:15:08, Yafang Shao wrote:
> On Mon, Aug 19, 2019 at 3:31 PM Michal Hocko <mhocko@xxxxxxxx> wrote:
> >
> > On Sun 18-08-19 00:24:54, Yafang Shao wrote:
> > > In the current memory.min design, the system is going to do OOM instead
> > > of reclaiming the reclaimable pages protected by memory.min if the
> > > system is short of free memory. Under this condition, the OOM
> > > killer may kill the processes in the memcg protected by memory.min.
> >
> > Could you be more specific about the configuration that leads to this
> > situation?
>
> I found this issue when running memory pressure tests to verify memory.min.
> It can be reproduced as below:
>
> memcg setting:
>   memory.max: 1G
>   memory.min: 512M
>   Some processes are running in this memcg, each with several hundred MB
>   of file mappings and several hundred MB of anon mappings.
>
> system setting:
>   swap: off
>   Some memory pressure tests are running on the system.
>
> When the memory usage of this memcg drops below memory.min, the
> global reclaimers stop reclaiming pages in this memcg, and when
> there is no available memory, the OOM killer is invoked.
> Unfortunately the OOM killer can choose a process running in the
> protected memcg.

Well, the memcg protection was designed to prevent regular memory
reclaim. It was not aimed at acting as a group-wide OOM protection.
The global OOM killer (but the memcg one as well) simply cares only
about oom_score_adj when selecting a victim. Adding yet another OOM
protection is likely to complicate the victim selection logic and make
it more surprising. E.g. why should a workload fitting inside the min
limit be so special? Do you have any real world example?

> To reproduce it easily, you can increase memory.min and set the
> oom_score_adj of the processes outside the protected memcg to -1000.

This sounds like a very dubious configuration to me.
There is no other option than choosing from the protected group.

> Is this setting proper?
>
> Thanks
> Yafang

--
Michal Hocko
SUSE Labs
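For reference, the reproduction setup described in the thread can be sketched roughly as below. This is a hedged sketch, not the poster's exact script: it assumes cgroup v2 mounted at /sys/fs/cgroup, and the group name "repro" and the $PID placeholder are hypothetical.

```shell
# Create a memcg with the limits from the report (cgroup v2 assumed).
mkdir -p /sys/fs/cgroup/repro
echo $((1024 * 1024 * 1024)) > /sys/fs/cgroup/repro/memory.max  # 1G hard limit
echo $((512 * 1024 * 1024))  > /sys/fs/cgroup/repro/memory.min  # 512M protection

# The test was run with swap disabled system-wide.
swapoff -a

# Bias the OOM killer away from processes *outside* the protected memcg,
# so a victim is likely to be picked from inside it ($PID is a placeholder).
echo -1000 > /proc/$PID/oom_score_adj
```

With memory usage of the group below memory.min, global reclaim skips it; under system-wide pressure the OOM killer then fires, and with outside processes at oom_score_adj -1000 it has little choice but a process inside the protected group, as the thread discusses.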