This is a note to let you know that I've just added the patch titled

    ftrace: Check if pages were allocated before calling free_pages()

to the 4.19-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     ftrace-check-if-pages-were-allocated-before-calling-.patch
and it can be found in the queue-4.19 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


commit 82d0c3cbc9c2961a91d2463724758e7895fd11ff
Author: Steven Rostedt (VMware) <rostedt@xxxxxxxxxxx>
Date:   Tue Mar 30 09:58:38 2021 -0400

    ftrace: Check if pages were allocated before calling free_pages()

    [ Upstream commit 59300b36f85f254260c81d9dd09195fa49eb0f98 ]

    It is possible that on error pg->size can be zero when getting its
    order, which would return a -1 value. It is dangerous to pass in an
    order of -1 to free_pages(). Check if order is greater than or equal
    to zero before calling free_pages().

    Link: https://lore.kernel.org/lkml/20210330093916.432697c7@xxxxxxxxxxxxxxxxxx/

    Reported-by: Abaci Robot <abaci@xxxxxxxxxxxxxxxxx>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@xxxxxxxxxxx>
    Stable-dep-of: 26efd79c4624 ("ftrace: Fix possible warning on checking all pages used in ftrace_process_locs()")
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 48ab4d750c650..1b92a22086f50 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -3095,7 +3095,8 @@ ftrace_allocate_pages(unsigned long num_to_init)
 	pg = start_pg;
 	while (pg) {
 		order = get_count_order(pg->size / ENTRIES_PER_PAGE);
-		free_pages((unsigned long)pg->records, order);
+		if (order >= 0)
+			free_pages((unsigned long)pg->records, order);
 		start_pg = pg->next;
 		kfree(pg);
 		pg = start_pg;
@@ -5843,7 +5844,8 @@ void ftrace_release_mod(struct module *mod)
 		clear_mod_from_hashes(pg);
 
 		order = get_count_order(pg->size / ENTRIES_PER_PAGE);
-		free_pages((unsigned long)pg->records, order);
+		if (order >= 0)
+			free_pages((unsigned long)pg->records, order);
 		tmp_page = pg->next;
 		kfree(pg);
 		ftrace_number_of_pages -= 1 << order;
@@ -6192,7 +6194,8 @@ void ftrace_free_mem(struct module *mod, void *start_ptr, void *end_ptr)
 		if (!pg->index) {
 			*last_pg = pg->next;
 			order = get_count_order(pg->size / ENTRIES_PER_PAGE);
-			free_pages((unsigned long)pg->records, order);
+			if (order >= 0)
+				free_pages((unsigned long)pg->records, order);
 			ftrace_number_of_pages -= 1 << order;
 			ftrace_number_of_groups--;
 			kfree(pg);
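
For reference, below is a minimal userspace sketch of the failure mode and the
guard the patch adds. It is not kernel code: get_count_order() here is a
simplified stand-in that mirrors the kernel helper's behavior of returning -1
for a count of 0, and fake_free_pages() is a hypothetical placeholder for
free_pages() used only to show which calls would be made. A count of 0 models
the error path where pg->size is still zero.

/* Plain C99 sketch; build with: cc -std=c99 -Wall sketch.c */
#include <stdio.h>

/* Simplified stand-in for the kernel's get_count_order(). */
static int get_count_order(unsigned int count)
{
	int order = 0;

	if (count == 0)
		return -1;		/* the dangerous case the patch guards against */
	while ((1u << order) < count)
		order++;		/* round count up to a power of two */
	return order;
}

/* Hypothetical stand-in for free_pages(): only reports what it would do. */
static void fake_free_pages(unsigned long addr, int order)
{
	printf("freeing %d page(s) at %#lx\n", 1 << order, addr);
}

int main(void)
{
	unsigned int counts[] = { 0, 1, 8 };	/* 0 models pg->size == 0 */

	for (unsigned int i = 0; i < sizeof(counts) / sizeof(counts[0]); i++) {
		int order = get_count_order(counts[i]);

		/* The fix: only free when a valid order was computed. */
		if (order >= 0)
			fake_free_pages(0x1000, order);
		else
			printf("count %u: skipping free, order is %d\n",
			       counts[i], order);
	}
	return 0;
}

Without the "order >= 0" check, the count-of-zero case would hand an order of
-1 to the page-freeing routine, which is exactly what the patch prevents in
ftrace_allocate_pages(), ftrace_release_mod() and ftrace_free_mem().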