On Fri, Nov 29, 2019 at 1:09 PM Daniel Axtens <dja@xxxxxxxxxx> wrote:
>
> Hi Dmitry,
>
> >> I am testing this support on next-20191129 and seeing the following warnings:
> >>
> >> BUG: sleeping function called from invalid context at mm/page_alloc.c:4681
> >> in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 44, name: kworker/1:1
> >> 4 locks held by kworker/1:1/44:
> >> #0: ffff888067c26d28 ((wq_completion)events){+.+.}, at:
> >> __write_once_size include/linux/compiler.h:247 [inline]
> >> #0: ffff888067c26d28 ((wq_completion)events){+.+.}, at:
> >> arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
> >> #0: ffff888067c26d28 ((wq_completion)events){+.+.}, at: atomic64_set
> >> include/asm-generic/atomic-instrumented.h:868 [inline]
> >> #0: ffff888067c26d28 ((wq_completion)events){+.+.}, at:
> >> atomic_long_set include/asm-generic/atomic-long.h:40 [inline]
> >> #0: ffff888067c26d28 ((wq_completion)events){+.+.}, at: set_work_data
> >> kernel/workqueue.c:615 [inline]
> >> #0: ffff888067c26d28 ((wq_completion)events){+.+.}, at:
> >> set_work_pool_and_clear_pending kernel/workqueue.c:642 [inline]
> >> #0: ffff888067c26d28 ((wq_completion)events){+.+.}, at:
> >> process_one_work+0x88b/0x1750 kernel/workqueue.c:2235
> >> #1: ffffc900002afdf0 (pcpu_balance_work){+.+.}, at:
> >> process_one_work+0x8c0/0x1750 kernel/workqueue.c:2239
> >> #2: ffffffff8943f080 (pcpu_alloc_mutex){+.+.}, at:
> >> pcpu_balance_workfn+0xcc/0x13e0 mm/percpu.c:1845
> >> #3: ffffffff89450c78 (vmap_area_lock){+.+.}, at: spin_lock
> >> include/linux/spinlock.h:338 [inline]
> >> #3: ffffffff89450c78 (vmap_area_lock){+.+.}, at:
> >> pcpu_get_vm_areas+0x1449/0x3df0 mm/vmalloc.c:3431
> >> Preemption disabled at:
> >> [<ffffffff81a84199>] spin_lock include/linux/spinlock.h:338 [inline]
> >> [<ffffffff81a84199>] pcpu_get_vm_areas+0x1449/0x3df0 mm/vmalloc.c:3431
> >> CPU: 1 PID: 44 Comm: kworker/1:1 Not tainted 5.4.0-next-20191129+ #5
> >> Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.12.0-1 04/01/2014
> >> Workqueue: events pcpu_balance_workfn
> >> Call Trace:
> >>  __dump_stack lib/dump_stack.c:77 [inline]
> >>  dump_stack+0x199/0x216 lib/dump_stack.c:118
> >>  ___might_sleep.cold.97+0x1f5/0x238 kernel/sched/core.c:6800
> >>  __might_sleep+0x95/0x190 kernel/sched/core.c:6753
> >>  prepare_alloc_pages mm/page_alloc.c:4681 [inline]
> >>  __alloc_pages_nodemask+0x3cd/0x890 mm/page_alloc.c:4730
> >>  alloc_pages_current+0x10c/0x210 mm/mempolicy.c:2211
> >>  alloc_pages include/linux/gfp.h:532 [inline]
> >>  __get_free_pages+0xc/0x40 mm/page_alloc.c:4786
> >>  kasan_populate_vmalloc_pte mm/kasan/common.c:762 [inline]
> >>  kasan_populate_vmalloc_pte+0x2f/0x1b0 mm/kasan/common.c:753
> >>  apply_to_pte_range mm/memory.c:2041 [inline]
> >>  apply_to_pmd_range mm/memory.c:2068 [inline]
> >>  apply_to_pud_range mm/memory.c:2088 [inline]
> >>  apply_to_p4d_range mm/memory.c:2108 [inline]
> >>  apply_to_page_range+0x5ca/0xa00 mm/memory.c:2133
> >>  kasan_populate_vmalloc+0x69/0xa0 mm/kasan/common.c:791
> >>  pcpu_get_vm_areas+0x1596/0x3df0 mm/vmalloc.c:3439
> >>  pcpu_create_chunk+0x240/0x7f0 mm/percpu-vm.c:340
> >>  pcpu_balance_workfn+0x1033/0x13e0 mm/percpu.c:1934
> >>  process_one_work+0x9b5/0x1750 kernel/workqueue.c:2264
> >>  worker_thread+0x8b/0xd20 kernel/workqueue.c:2410
> >>  kthread+0x365/0x450 kernel/kthread.c:255
> >>  ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:352
> >>
> >>
> >> Not sure if it's the same or not. Is it addressed by something in flight?
>
> It looks like this one is the same.
>
> There is a patch to fix it:
> https://lore.kernel.org/linux-mm/20191120052719.7201-1-dja@xxxxxxxxxx/
>
> Andrew said he had picked it up on the 22nd:
> https://marc.info/?l=linux-mm-commits&m=157438241512561&w=2
> It's landed in mmots but not mmotm, so hopefully that will happen and
> then it will land in -next very soon!
>
> I will look into your other bug report shortly.

Thanks for the quick responses, Andrey, Daniel.

> Regards,
> Daniel
>
> >>
> >> My config:
> >> https://gist.githubusercontent.com/dvyukov/36c7be311fdec9cd51c649f7c3cb2ddb/raw/39c6f864fdd0ffc53f0822b14c354a73c1695fa1/gistfile1.txt

I've tried this fix for pcpu_get_vm_areas:
https://groups.google.com/d/msg/kasan-dev/t_F2X1MWKwk/h152Z3q2AgAJ
and it helps. But this will break syzbot on linux-next soon.
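
For context on the splat above: __get_free_pages(GFP_KERNEL) is reached from
kasan_populate_vmalloc_pte() while pcpu_get_vm_areas() is still holding
vmap_area_lock (a spinlock, so preemption is disabled), and the might_sleep
check in prepare_alloc_pages() fires. The sketch below only illustrates that
generic pattern and the usual remedy of allocating before taking the lock; it
is not the code from the patches referenced in the thread, and example_lock /
the function names are made up for illustration.

	/* Minimal sketch of the bug class, not the actual kernel code. */
	static DEFINE_SPINLOCK(example_lock);	/* stand-in for vmap_area_lock */

	static void broken_pattern(void)
	{
		unsigned long page;

		spin_lock(&example_lock);		/* preemption disabled from here */
		page = __get_free_page(GFP_KERNEL);	/* may sleep -> "sleeping function
							   called from invalid context" */
		/* ... use page ... */
		spin_unlock(&example_lock);
		free_page(page);
	}

	static void usual_remedy(void)
	{
		/* allocate up front, while sleeping is still allowed */
		unsigned long page = __get_free_page(GFP_KERNEL);

		if (!page)
			return;

		spin_lock(&example_lock);
		/* ... install/use page under the lock, no allocations here ... */
		spin_unlock(&example_lock);
		free_page(page);
	}

The other common way out is to allocate with GFP_ATOMIC inside the critical
section, at the cost of the allocation being allowed to fail under pressure;
which approach the referenced fixes take is not shown here.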