On 3/6/24 19:24, Suren Baghdasaryan wrote:
> From: Kent Overstreet <kent.overstreet@xxxxxxxxx>
> 
> It seems we need to be more forceful with the compiler on this one.
> This is done for performance reasons only.
> 
> Signed-off-by: Kent Overstreet <kent.overstreet@xxxxxxxxx>
> Signed-off-by: Suren Baghdasaryan <surenb@xxxxxxxxxx>
> Reviewed-by: Kees Cook <keescook@xxxxxxxxxxxx>
> Reviewed-by: Pasha Tatashin <pasha.tatashin@xxxxxxxxxx>

Reviewed-by: Vlastimil Babka <vbabka@xxxxxxx>

> ---
>  mm/slub.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index 2ef88bbf56a3..0f3369f6188b 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2121,9 +2121,9 @@ bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
>  	return !kasan_slab_free(s, x, init);
>  }
>  
> -static inline bool slab_free_freelist_hook(struct kmem_cache *s,
> -					   void **head, void **tail,
> -					   int *cnt)
> +static __fastpath_inline
> +bool slab_free_freelist_hook(struct kmem_cache *s, void **head, void **tail,
> +			     int *cnt)
>  {
> 
>  	void *object;
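
As an aside for readers unfamiliar with the helper being used here: "more forceful" refers to using the always_inline attribute rather than the plain inline hint, which the compiler is free to ignore. Below is a minimal userspace sketch of that difference, not the kernel's actual definitions; the names fastpath_inline, SLUB_TINY_SKETCH and add_one are illustrative only, and my understanding is that mm/slub.c defines __fastpath_inline as __always_inline unless CONFIG_SLUB_TINY is set.

/*
 * Standalone sketch: plain "inline" is only a hint, so the compiler may
 * still emit an out-of-line call; the always_inline attribute forces the
 * body into every call site, which is what the fastpath helpers want.
 */
#include <stdio.h>

#ifndef SLUB_TINY_SKETCH
#define fastpath_inline inline __attribute__((__always_inline__))
#else
#define fastpath_inline		/* let the compiler decide, smaller text */
#endif

static fastpath_inline int add_one(int x)
{
	return x + 1;
}

int main(void)
{
	/* the call site below gets the inlined body when forced */
	printf("%d\n", add_one(41));
	return 0;
}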