From: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>

v5.4.109-rt56-rc1 stable review patch.
If anyone has any objections, please let me know.

-----------

[ Upstream commit 87bd0bf324f4c5468ea3d1de0482589f491f3145 ]

The location tracking cache has a size of a page and is resized if its
current size is too small. This allocation happens with interrupts
disabled and therefore can't happen on PREEMPT_RT.
Should one page be too small, we have to allocate more at the
beginning. The only downside is that fewer callers will be visible.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
Signed-off-by: Tom Zanussi <zanussi@xxxxxxxxxx>
---
 mm/slub.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index 1815e28852fe..0d78368d149a 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4647,6 +4647,9 @@ static int alloc_loc_track(struct loc_track *t, unsigned long max, gfp_t flags)
 	struct location *l;
 	int order;
 
+	if (IS_ENABLED(CONFIG_PREEMPT_RT) && flags == GFP_ATOMIC)
+		return 0;
+
 	order = get_order(sizeof(struct location) * max);
 
 	l = (void *)__get_free_pages(flags, order);
-- 
2.17.1
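
For reviewers who want to see the effect of the check in isolation, below is a
simplified, user-space sketch of the policy the patch enforces: the location
table is sized generously up front with GFP_KERNEL, and the resize path under
the list lock simply bails out on PREEMPT_RT instead of allocating with
interrupts disabled, so overflowing entries are dropped. This is illustrative
only, not kernel code: CONFIG_PREEMPT_RT, GFP_KERNEL and GFP_ATOMIC are
stand-in macros, and record_location()/main() are hypothetical callers that
merely mimic how mm/slub.c grows the table on demand.

/*
 * Hedged sketch of the allocation policy: refuse atomic resizes on RT,
 * so the table must be big enough from the start.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CONFIG_PREEMPT_RT 1      /* pretend this is an RT kernel */
#define GFP_KERNEL        0      /* may sleep: fine for the initial alloc */
#define GFP_ATOMIC        1      /* no sleeping: used under the list lock */

struct location { unsigned long addr; unsigned long count; };

struct loc_track {
	unsigned long max;       /* capacity of the table */
	unsigned long count;     /* entries currently used */
	struct location *loc;
};

/* Mirrors the patched alloc_loc_track(): refuse atomic resizes on RT. */
static int alloc_loc_track(struct loc_track *t, unsigned long max, int flags)
{
	struct location *l;

	if (CONFIG_PREEMPT_RT && flags == GFP_ATOMIC)
		return 0;                     /* caller keeps the old table */

	l = calloc(max, sizeof(*l));          /* __get_free_pages() stand-in */
	if (!l)
		return 0;

	if (t->count)
		memcpy(l, t->loc, sizeof(*l) * t->count);
	free(t->loc);
	t->max = max;
	t->loc = l;
	return 1;
}

/* Hypothetical caller: record a location, trying an atomic resize if full. */
static int record_location(struct loc_track *t, unsigned long addr)
{
	if (t->count >= t->max && !alloc_loc_track(t, 2 * t->max, GFP_ATOMIC))
		return 0;                     /* table full on RT: entry dropped */

	t->loc[t->count].addr = addr;
	t->loc[t->count].count = 1;
	t->count++;
	return 1;
}

int main(void)
{
	struct loc_track t = { 0, 0, NULL };
	unsigned long i, dropped = 0;

	/* Allocate generously up front, where sleeping is still allowed. */
	if (!alloc_loc_track(&t, 128, GFP_KERNEL))
		return 1;

	/* On RT the table never grows, so excess callers become invisible. */
	for (i = 0; i < 200; i++)
		if (!record_location(&t, 0x1000 + i))
			dropped++;

	printf("recorded %lu, dropped %lu\n", t.count, dropped);
	free(t.loc);
	return 0;
}

Running the sketch records the first 128 locations and drops the rest, which
is the "fewer callers will be visible" trade-off the commit message describes.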