Greg, the patch is clear about its dependency for pre-4.7 kernels. I do
not see f86e4271978b queued for the stable tree, though. Is it just me
not seeing it, or does your automation not check for such dependencies?

On Wed 22-11-17 09:37:57, Greg KH wrote:
> >From e492080e640c2d1235ddf3441cae634cfffef7e1 Mon Sep 17 00:00:00 2001
> From: Jaewon Kim <jaewon31.kim@xxxxxxxxxxx>
> Date: Wed, 15 Nov 2017 17:39:07 -0800
> Subject: [PATCH] mm/page_ext.c: check if page_ext is not prepared
> 
> online_page_ext() and page_ext_init() allocate page_ext for each
> section, but they do not allocate it if the first PFN is
> !pfn_present(pfn) or !pfn_valid(pfn). Then section->page_ext remains
> NULL. lookup_page_ext() checks for NULL only if CONFIG_DEBUG_VM is
> enabled. For a valid PFN, __set_page_owner() will try to get page_ext
> through lookup_page_ext(). Without CONFIG_DEBUG_VM, lookup_page_ext()
> misuses the NULL pointer as value 0, which leads to an invalid address
> access.
> 
> This is a panic example where PFN 0x100000 is not valid but PFN
> 0x13FC00 is being used for page_ext: section->page_ext is NULL, and
> get_entry() returned the invalid page_ext address 0x1DFA000 for PFN
> 0x13FC00.
> 
> To avoid this panic, the CONFIG_DEBUG_VM guard should be removed so
> that page_ext is checked at all times.
> 
> Unable to handle kernel paging request at virtual address 01dfa014
> ------------[ cut here ]------------
> Kernel BUG at ffffff80082371e0 [verbose debug info unavailable]
> Internal error: Oops: 96000045 [#1] PREEMPT SMP
> Modules linked in:
> PC is at __set_page_owner+0x48/0x78
> LR is at __set_page_owner+0x44/0x78
> __set_page_owner+0x48/0x78
> get_page_from_freelist+0x880/0x8e8
> __alloc_pages_nodemask+0x14c/0xc48
> __do_page_cache_readahead+0xdc/0x264
> filemap_fault+0x2ac/0x550
> ext4_filemap_fault+0x3c/0x58
> __do_fault+0x80/0x120
> handle_mm_fault+0x704/0xbb0
> do_page_fault+0x2e8/0x394
> do_mem_abort+0x88/0x124
> 
> Pre-4.7 kernels also need commit f86e4271978b ("mm: check the return
> value of lookup_page_ext for all call sites").
> 
> Link: http://lkml.kernel.org/r/20171107094131.14621-1-jaewon31.kim@xxxxxxxxxxx
> Fixes: eefa864b701d ("mm/page_ext: resurrect struct page extending code for debugging")
> Signed-off-by: Jaewon Kim <jaewon31.kim@xxxxxxxxxxx>
> Acked-by: Michal Hocko <mhocko@xxxxxxxx>
> Cc: Vlastimil Babka <vbabka@xxxxxxx>
> Cc: Minchan Kim <minchan@xxxxxxxxxx>
> Cc: Joonsoo Kim <js1304@xxxxxxxxx>
> Cc: <stable@xxxxxxxxxxxxxxx> [depends on f86e427197, see above]
> Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Signed-off-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
> 
> diff --git a/mm/page_ext.c b/mm/page_ext.c
> index 4f0367d472c4..2c16216c29b6 100644
> --- a/mm/page_ext.c
> +++ b/mm/page_ext.c
> @@ -125,7 +125,6 @@ struct page_ext *lookup_page_ext(struct page *page)
>  	struct page_ext *base;
>  
>  	base = NODE_DATA(page_to_nid(page))->node_page_ext;
> -#if defined(CONFIG_DEBUG_VM)
>  	/*
>  	 * The sanity checks the page allocator does upon freeing a
>  	 * page can reach here before the page_ext arrays are
> @@ -134,7 +133,6 @@ struct page_ext *lookup_page_ext(struct page *page)
>  	 */
>  	if (unlikely(!base))
>  		return NULL;
> -#endif
>  	index = pfn - round_down(node_start_pfn(page_to_nid(page)),
>  					MAX_ORDER_NR_PAGES);
>  	return get_entry(base, index);
> @@ -199,7 +197,6 @@ struct page_ext *lookup_page_ext(struct page *page)
>  {
>  	unsigned long pfn = page_to_pfn(page);
>  	struct mem_section *section = __pfn_to_section(pfn);
> -#if defined(CONFIG_DEBUG_VM)
>  	/*
>  	 * The sanity checks the page allocator does upon freeing a
>  	 * page can reach here before the page_ext arrays are
> @@ -208,7 +205,6 @@ struct page_ext *lookup_page_ext(struct page *page)
>  	 */
>  	if (!section->page_ext)
>  		return NULL;
> -#endif
>  	return get_entry(section->page_ext, pfn);
>  }
> 
-- 
Michal Hocko
SUSE Labs
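
[Editor's illustration, not part of the email above: a minimal, untested
userspace sketch of the failure mode the commit message describes. With the
CONFIG_DEBUG_VM-only NULL check compiled out, get_entry() does pointer
arithmetic on a NULL base and hands back a small non-NULL address that
__set_page_owner() then dereferences. The struct layout, entry size, and
index below are simplified assumptions, not the real kernel definitions.]

/*
 * Userspace sketch only, not kernel code.
 */
#include <stdio.h>

struct page_ext {
	unsigned long flags;		/* simplified stand-in for the real struct */
};

/* rough analogue of get_entry() in mm/page_ext.c: base + entry size * index */
static struct page_ext *get_entry(void *base, unsigned long index)
{
	return (struct page_ext *)((char *)base + sizeof(struct page_ext) * index);
}

int main(void)
{
	void *page_ext = NULL;		/* section->page_ext was never allocated */
	unsigned long index = 0x3fc00;	/* hypothetical pfn-derived index */

	/* Without "if (!section->page_ext) return NULL;" this is still reached: */
	struct page_ext *entry = get_entry(page_ext, index);

	/* Non-NULL garbage rather than NULL, so the caller dereferences it
	 * instead of bailing out -- the analogue of the oops above. */
	printf("bogus page_ext entry at %p\n", (void *)entry);
	return 0;
}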