On Fri, Jan 24, 2025 at 08:22:57PM +0800, Kun Hu wrote:
> But an interesting interaction relationship is that for researchers
> from academia to prove the advanced technology of their fuzzer, they
> seem to need to use their personal finding of real-world bugs as an
> important experimental metric. I think that's why you get reports
> that are modeled after syzbot (the official description of syzkaller
> describes the process for independent reports). If the quality of
> the individual reports is low, it does affect the judgment of the
> maintainer, but it is also a waste of everyone's time.

If you're going to do this, I would suggest that you make sure that
you're actually finding new bugs. More often than not, after we've
wasted a huge amount of upstream developers' time because the
researchers haven't set up the syzbot dashboard and e-mail service to
test whether a patch fixes a bug (which we often can't reproduce on
our own because it's highly environment-sensitive), we discover that
the bug has already been reported on the upstream syzbot. That makes
the academic syzbot a *double* waste of time, and it trains upstream
developers to simply ignore reports from these research forks of
syzbot unless they come with a reproducer, or maybe (this hasn't
happened yet) if the researchers actually set up the web dashboard
and e-mail responder.

I also blame the peer reviewers for the journals, for not asking the
question, "Why haven't you shown that the 'real world' bugs your
forked syzbot has found are ones that the original syzkaller hasn't
found yet?" And for not demanding of the academics, "If you want
*real* impact, get your changes merged into the upstream syzkaller,
so that they will continue to find and fix vulnerabilities instead of
ceasing to do so the moment we accept your paper."

Cheers,

						- Ted