On Fri, Oct 18, 2024 at 03:42:13PM +0200, Michal Hocko wrote:
> On Fri 18-10-24 08:31:22, Johannes Weiner wrote:
> > On Fri, Oct 18, 2024 at 12:12:00PM +0200, Michal Hocko wrote:
> > > On Thu 17-10-24 09:04:37, Joshua Hahn wrote:
> > > > HugeTLB usage is a metric that can provide utility for monitors hoping
> > > > to get more insight into the memory usage patterns in cgroups. It also
> > > > helps identify if large folios are being distributed efficiently across
> > > > workloads, so that tasks that can take most advantage of reduced TLB
> > > > misses are prioritized.
> > > >
> > > > While cgroupv2's hugeTLB controller does report this value, some users
> > > > who wish to track hugeTLB usage might not want to take on the additional
> > > > overhead or the features of the controller just to use the metric.
> > > > This patch introduces hugeTLB usage in the memcg stats, mirroring the
> > > > value in the hugeTLB controller and offering a more fine-grained
> > > > cgroup-level breakdown of the value in /proc/meminfo.
> > >
> > > This seems really confusing because the memcg controller is not
> > > responsible for the hugetlb memory. Could you be more specific why
> > > enabling the hugetlb controller is not really desirable when the actual
> > > per-group tracking is needed?
> >
> > We have competition over memory, but not specifically over hugetlb.
> >
> > The maximum hugetlb footprint of jobs is known in advance, and we
> > configure hugetlb_cma= accordingly. There are no static boot time
> > hugetlb reservations, and there is no opportunistic use of hugetlb
> > from jobs or other parts of the system. So we don't need control over
> > the hugetlb pool, and no limit enforcement on hugetlb specifically.
> >
> > However, memory overall is overcommitted between job and system
> > management. If the main job is using hugetlb, we need that to show up
> > in memory.current and be taken into account for memory.high and
> > memory.low enforcement. It's the old memory fungibility argument: if
> > you use hugetlb, it should reduce the budget for cache/anon.
> >
> > Nhat's recent patch to charge hugetlb to memcg accomplishes that.
> >
> > However, we now have potentially a sizable portion of memory in
> > memory.current that doesn't show up in memory.stat. Joshua's patch
> > addresses that, so userspace can understand its memory footprint.
> >
> > I hope that makes sense.
>
> Looking at 8cba9576df60 ("hugetlb: memcg: account hugetlb-backed memory
> in memory controller"), it describes this limitation:
>
>  * Hugetlb pages utilized while this option is not selected will not
>    be tracked by the memory controller (even if cgroup v2 is remounted
>    later on).
>
> and it would be great to have an explanation why the lack of tracking
> has proven problematic.

Yes, I agree it would be good to outline this in the changelog.

The argument is that memory.stat breaks down the consumers that are
charged to memory.current. hugetlb is (or can be) charged, but is not
broken out. This is a significant gap in the memcg stats picture.

> Also the above doesn't really explain why those who care cannot
> really enable the hugetlb controller to gain the consumption
> information.

Well, I have explained why we don't need it, at least. Enabling almost
a thousand lines of basically abandoned code, compared to the few lines
in this patch, doesn't strike me as reasonable.

That said, I don't think the hugetlb controller is relevant.
With hugetlb being part of memory.current (for arguments that are
already settled), it needs to be itemized in memory.stat. It's a gap in
the memory controller in any case.

> Also what happens if CGRP_ROOT_MEMORY_HUGETLB_ACCOUNTING is disabled.
> Should we report potentially misleading data?

Good point. The stat item tracking should follow the same rules as
charging, such that memory.current and memory.stat are always in sync.

A stat helper that mirrors the mem_cgroup_hugetlb_try_charge() checks
would make sense to me. E.g. lruvec_stat_mod_hugetlb_folio().
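
Something along these lines, as an untested sketch, would keep the two
in sync (NR_HUGETLB stands in for whatever new stat item the patch ends
up adding; the name and placement are only for illustration):

	void lruvec_stat_mod_hugetlb_folio(struct folio *folio, int nr_pages)
	{
		/*
		 * Mirror the mem_cgroup_hugetlb_try_charge() conditions:
		 * only account the stat when the hugetlb pages would also
		 * have been charged, so memory.stat never shows hugetlb
		 * that isn't reflected in memory.current.
		 */
		if (mem_cgroup_disabled() ||
		    !cgroup_subsys_on_dfl(memory_cgrp_subsys) ||
		    !(cgrp_dfl_root.flags & CGRP_ROOT_MEMORY_HUGETLB_ACCOUNTING))
			return;

		lruvec_stat_mod_folio(folio, NR_HUGETLB, nr_pages);
	}

The hugetlb charge/uncharge paths would then call this with the folio's
page count wherever they commit or cancel the memcg charge.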