Hi Lorenzo,

On Thu, Oct 31, 2024 at 5:59 PM 'Lorenzo Stoakes' via syzkaller-bugs
<syzkaller-bugs@xxxxxxxxxxxxxxxx> wrote:
>
> +Alan re: USB stalls
>
> On Thu, Oct 31, 2024 at 09:41:02AM -0700, syzbot wrote:
> > Hello,
> >
> > syzbot has tested the proposed patch and the reproducer did not trigger any issue:
> >
> > Reported-by: syzbot+7402e6c8042635c93ead@xxxxxxxxxxxxxxxxxxxxxxxxx
> > Tested-by: syzbot+7402e6c8042635c93ead@xxxxxxxxxxxxxxxxxxxxxxxxx
> >
> > Tested on:
> >
> > commit:         cffcc47b mm/mlock: set the correct prev on failure
> > git tree:       git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git/ mm-hotfixes-unstable
> > console output: https://syzkaller.appspot.com/x/log.txt?x=1304a630580000
> > kernel config:  https://syzkaller.appspot.com/x/.config?x=6648774f7c39d413
> > dashboard link: https://syzkaller.appspot.com/bug?extid=7402e6c8042635c93ead
> > compiler:       gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40
> >
> > Note: no patches were applied.
> > Note: testing is done by a robot and is best-effort only.
>
> OK this seems likely to be intermittent (and unrelated to what's in
> mm-unstable-fixes honestly) and does make me wonder if the fix referenced
> in [0] really has sorted things out? Or whether it has perhaps helped
> mitigate the issue but not sufficiently in conjunction with debug things
> that slow things down.
>
> Because we keep getting these reports, that mysteriously don't occur if we
> re-run (or hit other code paths), they seem to hit somewhat arbitrary parts
> of mm, and because CONFIG_DEBUG_VM_MAPLE_TREE is set we spend a _long_ time
> in mm validating trees (this config option is REALLY REALLY heavy-handed).
>
> I note we also set CONFIG_KCOV and CONFIG_KCOV_INSTRUMENT_ALL which isn't
> going to make anything quicker if the USB gets laggy.

These are necessary for coverage-guided fuzzing. However, when we find and
run reproducers, we don't actually set up /dev/kcov (a rough sketch of what
that setup involves is at the end of this mail), so I guess the impact of
the coverage callbacks is not that significant here. CONFIG_KASAN is likely
slowing things down much more.

>
> I'm not sure if there's a human who can help tweak the config for these
> hardware-centric tests at Google? At least tweaking the RCU stall time
> anyway?

We currently set:

CONFIG_RCU_CPU_STALL_TIMEOUT=100
CONFIG_RCU_EXP_CPU_STALL_TIMEOUT=21000

The expedited RCU stall timeout was capped at 21 seconds until some time
ago, but I guess we can now safely increase this value as well. I'll send
a PR with syzbot config updates.

--
Aleksandr

>
> In any case this continues not to look likely to be an actual mm issue as
> far as I can see.
>
> In [0] we were stalled in a validate call which would align with the idea
> that perhaps we were just dealing with a very very big tree and getting
> slowed down that way.
>
> Cheers, Lorenzo
>
> [0]: https://lore.kernel.org/all/967f3aa0-447a-4121-b80b-299c926a33f5@xxxxxxxxxxxxxxxxxxx/
>
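
P.S. For context, "setting up /dev/kcov" above refers to the userspace
sequence sketched below, adapted from the example in
Documentation/dev-tools/kcov.rst (the device node actually lives at
/sys/kernel/debug/kcov). This is only a minimal illustration of the kind of
setup the fuzzer does per thread, not the actual syzkaller executor code;
when a standalone C reproducer runs, this step is skipped, so the compiled-in
coverage callbacks still fire but return early without recording anything.

/*
 * Minimal KCOV usage sketch (per Documentation/dev-tools/kcov.rst):
 * collect PC coverage for a single syscall on the current thread.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

#define KCOV_INIT_TRACE _IOR('c', 1, unsigned long)
#define KCOV_ENABLE     _IO('c', 100)
#define KCOV_DISABLE    _IO('c', 101)
#define KCOV_TRACE_PC   0
#define COVER_SIZE      (64 << 10)

int main(void)
{
	unsigned long *cover, n, i;
	/* One fd collects coverage for one thread. */
	int fd = open("/sys/kernel/debug/kcov", O_RDWR);

	if (fd == -1)
		perror("open"), exit(1);
	/* Set trace mode and buffer size (in unsigned longs). */
	if (ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE))
		perror("ioctl init"), exit(1);
	/* Coverage buffer shared between kernel and userspace. */
	cover = mmap(NULL, COVER_SIZE * sizeof(unsigned long),
		     PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (cover == MAP_FAILED)
		perror("mmap"), exit(1);
	/* Enable PC collection for the current thread. */
	if (ioctl(fd, KCOV_ENABLE, KCOV_TRACE_PC))
		perror("ioctl enable"), exit(1);
	/* cover[0] holds the number of recorded PCs; reset it. */
	__atomic_store_n(&cover[0], 0, __ATOMIC_RELAXED);
	/* The syscall under test. */
	read(-1, NULL, 0);
	/* Dump the PCs executed in the kernel during the syscall. */
	n = __atomic_load_n(&cover[0], __ATOMIC_RELAXED);
	for (i = 0; i < n; i++)
		printf("0x%lx\n", cover[i + 1]);
	if (ioctl(fd, KCOV_DISABLE, 0))
		perror("ioctl disable"), exit(1);
	munmap(cover, COVER_SIZE * sizeof(unsigned long));
	close(fd);
	return 0;
}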