On Fri, Jun 26, 2020 at 4:29 AM Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>
> On Thu, Jun 25, 2020 at 03:40:42PM -0700, Sami Tolvanen wrote:
>
> > > Not boot tested, but it generates the required sections and they look
> > > more or less as expected, ymmv.
>
> > > diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> > > index a291823f3f26..189575c12434 100644
> > > --- a/arch/x86/Kconfig
> > > +++ b/arch/x86/Kconfig
> > > @@ -174,7 +174,6 @@ config X86
> > >  	select HAVE_EXIT_THREAD
> > >  	select HAVE_FAST_GUP
> > >  	select HAVE_FENTRY			if X86_64 || DYNAMIC_FTRACE
> > > -	select HAVE_FTRACE_MCOUNT_RECORD
> > >  	select HAVE_FUNCTION_GRAPH_TRACER
> > >  	select HAVE_FUNCTION_TRACER
> > >  	select HAVE_GCC_PLUGINS
> >
> > This breaks DYNAMIC_FTRACE according to kernel/trace/ftrace.c:
> >
> > #ifndef CONFIG_FTRACE_MCOUNT_RECORD
> > # error Dynamic ftrace depends on MCOUNT_RECORD
> > #endif
> >
> > And the build errors after that seem to confirm this. It looks like we might
> > need another flag to skip recordmcount.
>
> Hurm, Steve, how you want to do that?

Steven, did you have any thoughts about this? Moving recordmcount to an
objtool pass that knows about call sites feels like a much cleaner
solution than annotating kernel code to avoid unwanted relocations.
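
To make the idea concrete, something along these lines is what I have
in mind. This is only an untested sketch, not a real patch:
create_mcount_loc_section() is a name I made up, and for_each_insn(),
insn->call_dest, elf_create_section(), elf_add_reloc() and
insn->sec->sym stand in for whatever objtool actually exposes (the
header paths are guesses too):

#include <string.h>	/* strcmp() */
#include <elf.h>	/* R_X86_64_64 */
#include "check.h"	/* assumed: struct objtool_file, struct instruction */
#include "elf.h"	/* assumed: elf_create_section(), elf_add_reloc() */

static int create_mcount_loc_section(struct objtool_file *file)
{
	struct instruction *insn;
	struct section *sec;
	int nr = 0, idx = 0;

	/* First pass: count the __fentry__ call sites. */
	for_each_insn(file, insn) {
		if (insn->type == INSN_CALL && insn->call_dest &&
		    !strcmp(insn->call_dest->name, "__fentry__"))
			nr++;
	}

	/* One pointer-sized entry per call site, as recordmcount emits. */
	sec = elf_create_section(file->elf, "__mcount_loc",
				 sizeof(unsigned long), nr);
	if (!sec)
		return -1;

	/* Second pass: emit a relocation pointing at each call site. */
	for_each_insn(file, insn) {
		if (insn->type != INSN_CALL || !insn->call_dest ||
		    strcmp(insn->call_dest->name, "__fentry__"))
			continue;

		/*
		 * Make entry idx resolve to the address of the call
		 * instruction itself, section symbol plus offset.
		 */
		if (elf_add_reloc(file->elf, sec,
				  idx * sizeof(unsigned long),
				  R_X86_64_64, insn->sec->sym,
				  insn->offset))
			return -1;
		idx++;
	}

	return 0;
}

Since objtool already decodes every instruction, it knows exactly which
calls target __fentry__, so none of the filtering recordmcount does
today, or the source-level annotations, should be needed.

Sami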