On Wed, Feb 19, 2020 at 10:36:14AM -0500, Steven Rostedt wrote:
> On Wed, 19 Feb 2020 15:47:28 +0100
> Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>
> > --- a/arch/x86/lib/memcpy_32.c
> > +++ b/arch/x86/lib/memcpy_32.c
> > @@ -21,7 +21,7 @@ __visible void *memset(void *s, int c, s
> >  }
> >  EXPORT_SYMBOL(memset);
> >
> > -__visible void *memmove(void *dest, const void *src, size_t n)
> > +__visible notrace void *memmove(void *dest, const void *src, size_t n)
> >  {
> >  	int d0,d1,d2,d3,d4,d5;
> >  	char *ret = dest;
> > @@ -207,3 +207,8 @@ __visible void *memmove(void *dest, cons
> >
> >  }
> >  EXPORT_SYMBOL(memmove);
>
> Hmm, for things like this, which is adding notrace because of a single
> instance of it (although it is fine to trace in any other instance), it
> would be nice to have a gcc helper that could call "memmove+5" which
> would skip the tracing portion.

Or just open-code the memmove() in do_double_fault() I suppose. I don't
think we care about super optimized code there. It's the bloody ESPFIX
trainwreck.
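
For reference, "open-code" here just means doing the copy inline in
do_double_fault() instead of calling the exported memmove(), so that
path never goes through a traceable function entry (__fentry__/mcount).
A minimal sketch of the idea; the helper name and exact shape are my
own illustration, not something from this thread:

	/*
	 * Hypothetical sketch: a tiny memmove-style copy, marked
	 * __always_inline so it is folded into do_double_fault() and
	 * never appears as a traceable call.  Performance is
	 * irrelevant on this path.
	 */
	static __always_inline void df_copy(void *dst, const void *src, size_t n)
	{
		unsigned char *d = dst;
		const unsigned char *s = src;

		if (d <= s) {
			/* Forward copy when the regions don't overlap
			 * in the dangerous direction. */
			while (n--)
				*d++ = *s++;
		} else {
			/* Backward copy so an overlapping destination
			 * doesn't clobber unread source bytes. */
			while (n--)
				d[n] = s[n];
		}
	}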