On Thu, Mar 20, 2025 at 04:02:27PM +1100, Balbir Singh wrote:
> On 3/19/25 17:19, Shakeel Butt wrote:
> > A bit late, but let me still propose a session on topics related to memory
> > cgroups. Last year at LSFMM 2024, we discussed [1] the potential
> > deprecation of memcg v1. Since then we have made very good progress in that
> > regard. We have moved the v1-only code into a separate file and made it not
> > compile by default, added warnings to many v1-only interfaces, and have
> > removed a lot of v1-only code. This year, I want to focus on the performance
> > of memory cgroups, particularly improving the cost of charging and stats.
>
> I'd be very interested in the discussion; I am not there in person, FYI.
>
> > At a high level, we can partition memory charging into three cases: first
> > is user memory (anon & file), second is kernel memory (mostly slub), and
> > third is network memory. For network memory, [1] has described some of the
> > challenges. Similarly, for kernel memory, we had to revert patches where
> > memcg charging was too expensive [3,4].
> >
> > I want to discuss and brainstorm different ways to further optimize
> > memcg charging for all these types of memory. I am at the moment prototyping
> > multi-memcg support for per-cpu memcg stocks and would like to see what
> > else we can do.
>
> What do you mean by multi-memcg support? Does it mean creating those buckets
> per cpu?
>

Multiple cached memcgs in struct memcg_stock_pcp. In [1] I prototyped a
network-specific per-cpu multi-memcg stock. However, I think we need general
support instead of something just for networking.
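To make the idea concrete, here is a minimal userspace model of a per-cpu stock
that caches multiple memcgs instead of one. Everything here is illustrative
(the slot count, field names, and the naive eviction policy are assumptions for
this sketch), not the kernel's actual memcg_stock_pcp layout:

```c
/* Userspace sketch of a multi-memcg per-cpu charge stock.
 * All names and policies are illustrative, not the kernel's real layout. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define MEMCG_STOCK_SLOTS 4   /* memcgs cached per CPU (assumed value) */
#define STOCK_BATCH       64  /* pages pre-charged per refill (assumed) */

struct memcg { int id; };     /* trivial stand-in for struct mem_cgroup */

struct memcg_stock_slot {
	struct memcg *cached;     /* memcg owning this slot, NULL if free */
	unsigned int nr_pages;    /* pre-charged pages still available */
};

struct memcg_stock_pcp_model {
	struct memcg_stock_slot slot[MEMCG_STOCK_SLOTS];
};

/* Fast path: satisfy a charge from a matching cached slot without
 * touching the shared page counters. Returns false on a miss, in
 * which case the caller would take the slow charge path. */
static bool consume_stock(struct memcg_stock_pcp_model *stock,
			  struct memcg *memcg, unsigned int nr_pages)
{
	for (size_t i = 0; i < MEMCG_STOCK_SLOTS; i++) {
		struct memcg_stock_slot *s = &stock->slot[i];

		if (s->cached == memcg && s->nr_pages >= nr_pages) {
			s->nr_pages -= nr_pages;
			return true;
		}
	}
	return false;
}

/* Slow-path helper: cache pre-charged pages for @memcg. Reuses the
 * memcg's own slot or a free one; otherwise evicts slot 0 as a crude
 * stand-in for a real replacement policy (a real implementation would
 * return the evicted remainder to the page counter). */
static void refill_stock(struct memcg_stock_pcp_model *stock,
			 struct memcg *memcg, unsigned int nr_pages)
{
	struct memcg_stock_slot *victim = &stock->slot[0];

	for (size_t i = 0; i < MEMCG_STOCK_SLOTS; i++) {
		if (stock->slot[i].cached == memcg || !stock->slot[i].cached) {
			victim = &stock->slot[i];
			break;
		}
	}
	if (victim->cached != memcg)
		victim->nr_pages = 0;	/* drop the evicted memcg's stock */
	victim->cached = memcg;
	victim->nr_pages += nr_pages;
}
```

The point of the multi-slot design is that a CPU bouncing between a handful of
memcgs (common with per-socket network charging) keeps hitting the fast path
instead of flushing and refilling the single cached memcg on every switch.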