On Thu, 21 Sep 2017 17:38:53 +0200, Andrey Konovalov wrote:
>
> Hi!
>
> I've got the following report while fuzzing the kernel with syzkaller.
>
> On commit ebb2c2437d8008d46796902ff390653822af6cc4 (Sep 18).
>
> ------------[ cut here ]------------
> WARNING: CPU: 1 PID: 1846 at mm/page_alloc.c:3883
> __alloc_pages_slowpath+0x1ef2/0x2d70
> Modules linked in:
> CPU: 1 PID: 1846 Comm: kworker/1:2 Not tainted
> 4.14.0-rc1-42251-gebb2c2437d80 #215
> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
> Workqueue: usb_hub_wq hub_event
> task: ffff8800643c18c0 task.stack: ffff88006b658000
> RIP: 0010:trace_reclaim_retry_zone ./include/trace/events/oom.h:31
> RIP: 0010:should_reclaim_retry mm/page_alloc.c:3783
> RIP: 0010:__alloc_pages_slowpath+0x1ef2/0x2d70 mm/page_alloc.c:4039
> RSP: 0018:ffff88006b65d938 EFLAGS: 00010246
> RAX: 00000000ffffa666 RBX: 00000000014000c0 RCX: 0000000000000000
> RDX: 0000000000000000 RSI: 0000000000000034 RDI: 000000000140c0c0
> RBP: ffff88006b65e088 R08: 0000000000000000 R09: fffffffffff00f88
> R10: 0000000000000000 R11: 0000000000000085 R12: ffff88006b65e130
> R13: ffff88006b65e270 R14: ffff88006b65e0f0 R15: 000000000140c0c0
> FS:  0000000000000000(0000) GS:ffff88006c900000(0000) knlGS:0000000000000000
> CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 0000000020829000 CR3: 0000000063cce000 CR4: 00000000000006e0
> Call Trace:
>  __alloc_pages_nodemask+0x921/0xf70 mm/page_alloc.c:4217
>  alloc_pages_current+0xbb/0x1f0 mm/mempolicy.c:2035
>  alloc_pages ./include/linux/gfp.h:505
>  __get_free_pages+0x14/0x50 mm/page_alloc.c:4248
>  usb_stream_new+0x50f/0x9f0 sound/usb/usx2y/usb_stream.c:214

The warning itself should be harmless; it only indicates that the driver
tried to allocate memory pages of too high an order.  The error path
handles the allocation failure gracefully, so we can basically suppress
the warning by adding __GFP_NOWARN, together with a sanity check on the
requested size on the caller side.

I've been traveling and will be traveling again next week, so I'll cook
up a fix once I'm back at work.


thanks,

Takashi
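
[Editor's sketch, not the actual patch: the snippet below illustrates
the two changes proposed above.  The helper name
usb_stream_alloc_pages() and the MAX_STREAM_PAGE_ORDER cap are
hypothetical, invented for this sketch; the gfp flags mirror the
allocation that usb_stream_new() already performs in
sound/usb/usx2y/usb_stream.c, and only the added __GFP_NOWARN bit plus
the caller-side order check correspond to the proposal.]

    #include <linux/gfp.h>
    #include <linux/mm.h>

    /* Illustrative cap on the allocation order; the real limit would
     * have to be derived from the driver's maximum period/frame sizes.
     */
    #define MAX_STREAM_PAGE_ORDER	8

    static void *usb_stream_alloc_pages(size_t read_size)
    {
    	int pg = get_order(read_size);

    	/* Caller-side sanity check: refuse absurdly large requests
    	 * instead of asking the page allocator for a huge order.
    	 */
    	if (pg > MAX_STREAM_PAGE_ORDER)
    		return NULL;

    	/* __GFP_NOWARN suppresses the page-allocation-failure warning
    	 * seen in the report above; the caller already handles a NULL
    	 * return gracefully.
    	 */
    	return (void *) __get_free_pages(GFP_KERNEL | __GFP_COMP |
    					 __GFP_ZERO | __GFP_NOWARN, pg);
    }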