On Sat 07-03-15 13:38:08, Sasha Levin wrote:
[...]
> [ 1573.730097] ? kasan_free_pages (mm/kasan/kasan.c:301)
> [ 1573.788680] free_pages_prepare (mm/page_alloc.c:791)
> [ 1573.788680] ? free_hot_cold_page (./arch/x86/include/asm/paravirt.h:809 (discriminator 2) mm/page_alloc.c:1579 (discriminator 2))
> [ 1573.788680] free_hot_cold_page (mm/page_alloc.c:1543)
> [ 1573.788680] __free_pages (mm/page_alloc.c:2957)
> [ 1573.788680] ? __vunmap (mm/vmalloc.c:1460 (discriminator 2))
> [ 1573.788680] __vunmap (mm/vmalloc.c:1460 (discriminator 2))

__vunmap is doing:

	for (i = 0; i < area->nr_pages; i++) {
		struct page *page = area->pages[i];

		BUG_ON(!page);
		__free_page(page);
	}

Is it possible that nr_pages is a huge number (a large vmalloc area)? I do
not see any cond_resched() down the __free_page path at least.

vfree delays the call to a workqueue when called from irq context, and
vunmap is marked as might_sleep(), so to me it looks like it would be safe
to reschedule there. Something for people familiar with vmalloc, though.

Anyway, the loop seems to have been there for ages, so I guess somebody
just started calling vmalloc for huge areas recently, which is why this
has shown up only now.
--
Michal Hocko
SUSE Labs