From: Kent Overstreet <kent.overstreet@xxxxxxxxx>

It seems we need to be more forceful with the compiler on this one.
This is done for performance reasons only.

Signed-off-by: Kent Overstreet <kent.overstreet@xxxxxxxxx>
Signed-off-by: Suren Baghdasaryan <surenb@xxxxxxxxxx>
Reviewed-by: Kees Cook <keescook@xxxxxxxxxxxx>
---
 mm/slub.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index 2ef88bbf56a3..d31b03a8d9d5 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2121,7 +2121,7 @@ bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
 	return !kasan_slab_free(s, x, init);
 }
 
-static inline bool slab_free_freelist_hook(struct kmem_cache *s,
+static __always_inline bool slab_free_freelist_hook(struct kmem_cache *s,
					    void **head, void **tail,
					    int *cnt)
 {
-- 
2.44.0.rc0.258.g7320e95886-goog
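
As context for the change above, a minimal userspace sketch (not kernel code) of the distinction the patch relies on, assuming GCC/Clang semantics: plain "inline" is only a hint the optimizer may ignore, while the always_inline function attribute, which the kernel's __always_inline wrapper maps to, requires the body to be substituted at every call site. The names add_hint and add_forced below are illustrative only.

#include <stdio.h>

/* "inline" is only a request; the compiler may still emit a call. */
static inline int add_hint(int a, int b)
{
	return a + b;
}

/* The always_inline attribute forces the body into every call site. */
static inline __attribute__((__always_inline__)) int add_forced(int a, int b)
{
	return a + b;
}

int main(void)
{
	printf("%d %d\n", add_hint(1, 2), add_forced(3, 4));
	return 0;
}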