On Wed, Aug 31, 2022 at 1:56 PM Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:
>
> On Wed, Aug 31, 2022 at 12:02 PM Kent Overstreet
> <kent.overstreet@xxxxxxxxx> wrote:
> >
> > On Wed, Aug 31, 2022 at 12:47:32PM +0200, Michal Hocko wrote:
> > > On Wed 31-08-22 11:19:48, Mel Gorman wrote:
> > > > Whatever the case, asking for an explanation as to why equivalent
> > > > functionality cannot be created from ftrace/kprobe/eBPF/whatever is
> > > > reasonable.
> > >
> > > Fully agreed, and this is especially true for a change this size:
> > > 77 files changed, 3406 insertions(+), 703 deletions(-)
> >
> > In the case of memory allocation accounting, you flat out cannot do this
> > with ftrace - you could maybe do a janky version that isn't fully
> > accurate, much slower, more complicated for the developer to understand
> > and debug, and more complicated for the end user.
> >
> > But please, I invite anyone who's actually been doing this with ftrace
> > to demonstrate otherwise.
> >
> > Ftrace just isn't the right tool for the job here - we're talking about
> > adding per-callsite accounting to some of the fastest fast paths in the
> > kernel.
> >
> > And the size of the changes for memory allocation accounting is much
> > more reasonable:
> > 33 files changed, 623 insertions(+), 99 deletions(-)
> >
> > The code tagging library should exist anyway; it's been open coded half
> > a dozen times in the kernel already.
> >
> > And once we've got that, the time stats code is _also_ far simpler than
> > doing it with ftrace would be. If anyone here has successfully debugged
> > latency issues with ftrace, I'd really like to hear it. Again, for
> > debugging latency issues you want something that can always be on, and
> > that's not cheap with ftrace - and never mind the hassle of correlating
> > start and end wait trace events, building up histograms, etc. - that's
> > all handled here.
> >
> > Cheap, simple, easy to use. What more could you want?
>
> This is very interesting work! Do you have any data about the overhead
> this introduces, especially in a production environment? I am
> especially interested in memory allocations tracking and detecting
> leaks.

I had the numbers for my previous implementation, before we started using
the lazy percpu counters, but those would not apply to the new
implementation. I'll rerun the measurements and post the exact numbers in
a day or so.

> (Sorry if you already posted this kind of data somewhere that I missed)
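
In the meantime, for anyone who hasn't looked at the series yet, here is a
rough userspace sketch of the per-callsite tagging idea being discussed.
The names (alloc_tag, my_alloc) are made up for illustration - this is not
the patchset's actual API - and a single atomic stands in for the lazy
percpu counters the real series uses. The point it demonstrates is the one
Kent made above: each callsite gets its own static tag in a dedicated
section, so the fast-path cost is a single counter update, with no stack
unwinding or tracing, and a reader can walk the section to report
per-callsite totals.

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct alloc_tag {
	const char *file;
	int line;
	_Atomic unsigned long bytes;	/* stand-in for lazy percpu counters */
};

/* Bounds of the "alloc_tags" section, provided by the linker. */
extern struct alloc_tag __start_alloc_tags[], __stop_alloc_tags[];

/*
 * Each expansion of my_alloc() emits its own static tag, so the
 * callsite's identity is resolved at compile time - the runtime cost
 * is one relaxed atomic add.
 */
#define my_alloc(size) ({						\
	static struct alloc_tag __tag					\
		__attribute__((used, section("alloc_tags"))) =		\
		{ .file = __FILE__, .line = __LINE__ };			\
	atomic_fetch_add_explicit(&__tag.bytes, (size),			\
				  memory_order_relaxed);		\
	malloc(size);							\
})

/* Walk every tag in the section and report per-callsite totals. */
static void dump_alloc_tags(void)
{
	struct alloc_tag *t;

	for (t = __start_alloc_tags; t < __stop_alloc_tags; t++)
		printf("%s:%d: %lu bytes allocated\n",
		       t->file, t->line, t->bytes);
}

int main(void)
{
	void *a = my_alloc(128);
	void *b = my_alloc(4096);

	dump_alloc_tags();
	free(a);
	free(b);
	return 0;
}

(The sketch only counts cumulative bytes per callsite; the real series
also hooks the free path so that a leak shows up as a callsite whose net
counter keeps growing.)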
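
The time-stats argument can be sketched the same way - again with
hypothetical names, not the patchset API. Because the start and end of a
wait are correlated on the caller's stack, there is no matching of
separate trace events and no per-event buffer; the per-site stats are
updated inline and can stay enabled permanently:

#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

struct wait_stats {
	const char *name;
	uint64_t count;
	uint64_t total_ns;
	uint64_t max_ns;
};

static uint64_t now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

/* Fold one completed wait into the callsite's running statistics. */
static void wait_stats_update(struct wait_stats *ws, uint64_t start)
{
	uint64_t d = now_ns() - start;

	ws->count++;			/* a real version would be percpu */
	ws->total_ns += d;
	if (d > ws->max_ns)
		ws->max_ns = d;
}

static struct wait_stats demo_stats = { .name = "demo_sleep" };

int main(void)
{
	for (int i = 0; i < 3; i++) {
		uint64_t t0 = now_ns();

		usleep(1000 * (i + 1));	/* the "wait" being measured */
		wait_stats_update(&demo_stats, t0);
	}
	printf("%s: count=%llu avg=%lluns max=%lluns\n", demo_stats.name,
	       (unsigned long long)demo_stats.count,
	       (unsigned long long)(demo_stats.total_ns / demo_stats.count),
	       (unsigned long long)demo_stats.max_ns);
	return 0;
}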