On Fri, Oct 28, 2022 at 7:39 AM Johannes Weiner <hannes@xxxxxxxxxxx> wrote:
>
> On Thu, Oct 27, 2022 at 01:43:24PM -0700, Yosry Ahmed wrote:
> > On Thu, Oct 27, 2022 at 7:15 AM Johannes Weiner <hannes@xxxxxxxxxxx> wrote:
> > > On Wed, Oct 26, 2022 at 07:41:21PM -0700, Yosry Ahmed wrote:
> > > > My 2c, if we care about direct reclaim as in reclaim that may stall
> > > > user space application allocations, then there are other reclaim
> > > > contexts that may pollute the direct reclaim stats. For instance,
> > > > proactive reclaim, or reclaim done by writing a limit lower than the
> > > > current usage to memory.max or memory.high, as they are not done in
> > > > the context of the application allocating memory.
> > > >
> > > > At Google, we have some internal direct reclaim memcg statistics, and
> > > > the way we handle this is by passing a flag from such contexts to
> > > > try_to_free_mem_cgroup_pages() in the reclaim_options arg. This flag
> > > > is echoed into a scan_struct bit, which we then use to filter out
> > > > direct reclaim operations that actually cause latencies in user space
> > > > allocations.
> > > >
> > > > Perhaps something similar might be more generic here? I am not sure
> > > > what context khugepaged reclaims memory from, but I think it's not a
> > > > memcg context, so maybe we want to generalize the reclaim_options arg
> > > > to try_to_free_pages() or whatever interface khugepaged uses to free
> > > > memory.
> > >
> > > So at the /proc/vmstat level, I'm not sure it matters much because it
> > > doesn't count any cgroup_reclaim() activity.
> > >
> > > But at the cgroup level, it sure would be nice to split out proactive
> > > reclaim churn. Both in terms of not polluting direct reclaim counts,
> > > but also for *knowing* how much proactive reclaim is doing.
> > >
> > > Do you have separate counters for this?
> >
> > Not yet. Currently we only have the first part, not polluting direct
> > reclaim counts.
> >
> > We basically exclude reclaim coming from memory.reclaim, setting
> > memory.max/memory.limit_in_bytes, memory.high (on write, not hitting
> > the high limit), and memory.force_empty from direct reclaim stats.
> >
> > As for having a separate counter for proactive reclaim, do you think
> > it should be limited to reclaim coming from memory.reclaim (and
> > potentially memory.force_empty), or should it include reclaim coming
> > from limit-setting as well?
>
> A combined counter seems reasonable to me. We *have* used the limit
> knobs to drive proactive reclaim in production in the past, so it's
> not a stretch. And I can't think of a scenario where you'd like them
> to be separate.
>
> I could think of two ways of describing it:
>
> pgscan_user: User-requested reclaim. Could be confusing if we ever
> have an in-kernel proactive reclaim driver - unless that would then go
> to another counter (new or kswapd).
>
> pgscan_ext: Reclaim activity from extraordinary/external
> requests. External as in: outside the allocation context.

I imagine that if the kernel is doing proactive reclaim on its own, we
might want a separate counter for that anyway, to monitor what the
kernel is doing. So pgscan_user sounds nice for now, but I also like
that pgscan_ext explicitly says "this is external to the allocation
context". We can just go with pgscan_user and document it properly.

How would khugepaged fit into this story? It seems like it would be
part of pgscan_ext but not pgscan_user.
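For reference, the kind of plumbing I was describing above looks
roughly like the below. This is only a sketch: MEMCG_RECLAIM_USER and
the user_requested bit in struct scan_control are made-up names for
illustration; try_to_free_mem_cgroup_pages() and its reclaim_options
arg are the existing interfaces.

	/* Sketch only -- the names below are placeholders, not actual kernel code. */

	/* include/linux/swap.h: a new reclaim_options bit for user-requested
	 * reclaim (memory.reclaim, limit writes, force_empty, ...): */
	#define MEMCG_RECLAIM_USER	(1 << 2)	/* hypothetical */

	/* mm/memcontrol.c: callers acting on a user request tag the reclaim: */
		reclaimed = try_to_free_mem_cgroup_pages(memcg, nr_to_reclaim,
							 GFP_KERNEL,
							 MEMCG_RECLAIM_MAY_SWAP |
							 MEMCG_RECLAIM_USER);

	/* mm/vmscan.c: try_to_free_mem_cgroup_pages() echoes the flag into
	 * struct scan_control so the accounting sites can tell these
	 * operations apart from allocation-context direct reclaim: */
		struct scan_control sc = {
			/* ... existing fields ... */
			.user_requested = !!(reclaim_options & MEMCG_RECLAIM_USER),
		};

The nice part of carrying it in scan_control is that the stat sites
further down don't need to know anything about the caller.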
I imagine we also don't want to pollute proactive reclaim counters with
khugepaged reclaim (or other non-direct reclaim). Maybe pgscan_user and
pgscan_kernel/pgscan_indirect for things like khugepaged? The problem
with pgscan_kernel/pgscan_indirect is that if we add a proactive
reclaim kthread in the future, it would technically fit there, but we
would want a separate counter for it.

I am honestly not sure where to put khugepaged. The reasons I don't
like a dedicated counter for khugepaged are:
- What if other kthreads start doing the same reclaim khugepaged does?
  Do we add one counter per kthread?
- What if we deprecate khugepaged (or such threads)? That seems more
  likely than deprecating kswapd.

It looks like we want a stat that would group all of this reclaim
coming from non-direct kthreads, but would not include a future
proactive reclaim kthread.
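Wherever we land on the names, the attribution itself would have to
happen where vmscan currently picks between the kswapd and direct
buckets, roughly like the below. PGSCAN_USER and the user_requested
bit are the same placeholders as in the sketch above, not existing
counters.

	/* mm/vmscan.c, sketch: counter selection where scanned pages are
	 * accounted. A user-requested bucket, and/or a bucket for reclaiming
	 * kthreads like khugepaged, would slot in next to the existing ones. */
		if (current_is_kswapd())
			item = PGSCAN_KSWAPD;
		else if (sc->user_requested)	/* hypothetical bit */
			item = PGSCAN_USER;	/* hypothetical counter */
		else
			item = PGSCAN_DIRECT;

		if (!cgroup_reclaim(sc))
			__count_vm_events(item, nr_scanned);
		__count_memcg_events(lruvec_memcg(lruvec), item, nr_scanned);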