On Thu, Sep 1, 2022 at 12:18 AM Michal Hocko <mhocko@xxxxxxxx> wrote:
>
> On Wed 31-08-22 15:01:54, Kent Overstreet wrote:
> > On Wed, Aug 31, 2022 at 12:47:32PM +0200, Michal Hocko wrote:
> > > On Wed 31-08-22 11:19:48, Mel Gorman wrote:
> > > > Whatever, asking for an explanation as to why equivalent
> > > > functionality cannot be created from ftrace/kprobe/eBPF/whatever
> > > > is reasonable.
> > >
> > > Fully agreed, and this is especially true for a change this size:
> > > 77 files changed, 3406 insertions(+), 703 deletions(-)
> >
> > In the case of memory allocation accounting, you flat-out cannot do
> > this with ftrace - you could maybe do a janky version that isn't fully
> > accurate, is much slower, and is more complicated both for the
> > developer to understand and debug and for the end user.
> >
> > But please, I invite anyone who's actually been doing this with ftrace
> > to demonstrate otherwise.
> >
> > Ftrace just isn't the right tool for the job here - we're talking
> > about adding per-callsite accounting to some of the fastest fast paths
> > in the kernel.
> >
> > And the size of the changes for memory allocation accounting is much
> > more reasonable:
> > 33 files changed, 623 insertions(+), 99 deletions(-)
> >
> > The code tagging library should exist anyway; it's been open coded
> > half a dozen times in the kernel already.
> >
> > And once we've got that, the time stats code is _also_ far simpler
> > than doing it with ftrace would be. If anyone here has successfully
> > debugged latency issues with ftrace, I'd really like to hear it.
> > Again, for debugging latency issues you want something that can always
> > be on, and that's not cheap with ftrace - never mind the hassle of
> > correlating start and end wait trace events, building up histograms,
> > etc. - that's all handled here.
> >
> > Cheap, simple, easy to use. What more could you want?
>
> A big ad on a banner. But more seriously.
>
> This patchset is _huge_ and touches a lot of different areas. It will
> be not only hard to review but even harder to maintain long term. So
> it is completely reasonable to ask for potential alternatives with a
> smaller code footprint. I am pretty sure you are aware of that
> workflow.

The patchset is huge because it introduces a reusable part (the first 6
patches introducing code tagging) and 6 different applications in very
different areas of the kernel. We wanted to present all of them in the
RFC to show the variety of cases this mechanism can be reused for. If
code tagging is accepted, each application can be posted separately to
the appropriate group of people. Hopefully that makes it easier to
review. The first 6 patches are not that big and are quite isolated,
IMHO:

 include/linux/codetag.h             |  83 ++++++++++
 include/linux/lazy-percpu-counter.h |  67 ++++++++
 include/linux/module.h              |   1 +
 kernel/module/internal.h            |   1 -
 kernel/module/main.c                |   4 +
 lib/Kconfig                         |   3 +
 lib/Kconfig.debug                   |   4 +
 lib/Makefile                        |   3 +
 lib/codetag.c                       | 248 ++++++++++++++++++++++++++++
 lib/lazy-percpu-counter.c           | 141 ++++++++++++++++
 lib/string_helpers.c                |   3 +-
 scripts/kallsyms.c                  |  13 ++

> So I find Peter's question completely appropriate, while your response
> to it, not so much! Maybe ftrace is not the right tool for the
> intended job. Maybe there are other ways, and it would be really great
> to show that those have been evaluated and that they are not suitable
> for reasons a), b) and c).

That's fair. For memory tracking I looked into using kmemleak and
page_owner, which can't match the required functionality at an overhead
acceptable for production and pre-production testing environments. I
haven't evaluated traces + BPF myself, but I've heard from other members
of my team who tried using them in a production environment, with poor
results. I'll try to get more specific information on that.

> E.g.
> Oscar has been working on extending page_ext to track the number of
> allocations for a specific calltrace[1]. Is this a 1:1 replacement? No!
> But it can help in environments where page_ext can be enabled, and it
> is completely non-intrusive to the MM code.

Thanks for pointing out this work. I'll need to review and maybe profile
it before making any claims.

> If the page_ext overhead is not desirable/acceptable then I am sure
> there are other options. E.g. the kprobes/LivePatching framework can
> hook into functions and alter their behavior. So why not use that for
> data collection? Has this been evaluated at all?

I'm not sure how I could hook into, say, alloc_pages() and find out
where it was called from without capturing the call stack (which would
introduce overhead at every allocation). I would love to discuss this or
other alternatives if they can be done with low enough overhead.

Thanks,
Suren.

> And please note that I am not claiming the presented work is
> approaching the problem from the wrong direction. It might very well
> solve multiple problems in a single go, _but_ the long-term code
> maintenance burden really has to be carefully evaluated, and if we can
> achieve a reasonable subset of the functionality with existing
> infrastructure then I would be inclined to sacrifice some portions for
> a considerably smaller code footprint.
>
> [1] http://lkml.kernel.org/r/20220901044249.4624-1-osalvador@xxxxxxx
> --
> Michal Hocko
> SUSE Labs