On Mon, Sep 11, 2023 at 03:03:45PM -0400, James Bottomley wrote:
> On Sun, 2023-09-10 at 23:10 -0400, Theodore Ts'o wrote:
> > On Sun, Sep 10, 2023 at 03:51:42PM -0400, James Bottomley wrote:
> [...]
> > > Perhaps we should also go back to seeing if we can prize some
> > > resources out of the major moneymakers in the cloud space. After
> > > all, a bug that could cause a cloud exploit might not be even
> > > exploitable on a personal laptop that has no untrusted users.
> >
> > Actually, I'd say this is backwards. Many of these issues, and I'd
> > argue all that involve a maliciously corrupted file system, are not
> > actually an issue in the cloud space, because we *already* assume
> > that the attacker may have root. After all, anyone can pay their $5
> > CPU/hour, and get an Amazon or Google or Azure VM, and then run
> > arbitrary workloads as root.
>
> Well, that was just one example. Another way cloud companies could
> potentially help is their various AI projects: I seem to get daily
> requests from AI people for me to tell them just how AI could help
> Linux. When I suggest bug report triage and classification would be my
> number one thing, they all back off faster than a mouse crashing a cat
> convention with claims like "That's too hard a problem" and also that
> in spite of ChatGPT getting its facts wrong and spewing rubbish for
> student essays, it wouldn't survive the embarrassment of being
> ridiculed by kernel developers for misclassifying bug reports.

No fucking way.

Just because you can do something doesn't make it right or ethical.
It is not ethical to experiment on human subjects without their
consent. When someone asks the maintainer of a bot to stop doing
something because it is causing harm to people, then ethics dictate
that the bot should be *stopped immediately*, regardless of whatever
other benefits it might have.

This is one of the major problems with syzbot: we can't get it turned
off even though it is clearly doing harm to people. We didn't consent
to being subjected to the constant flood of issues that it throws our
way, and despite repeated requests for it to be changed or stopped to
reduce the harm it is doing, the owners of the bot refuse to change
anything. If anything, they double down and make things worse for the
people they send bug reports to (e.g. by adding explicit writes to
the block device under mounted filesystems).

In this context, the bot and its owners need to be considered rogue
actors. The owners of the bot just don't seem to care about the harm
it is doing and largely refuse to do anything to reduce that harm.

Suggesting that the solution to the harm a rogue testing bot is
causing people in the community is to subject those same people to
*additional AI-based bug reporting experiments without their consent*
is beyond my comprehension.

> I'm not sure peer pressure works on the AI community, but surely if
> enough of us asked, they might one day overcome their fear of trying it
> ...

Fear isn't an issue here. Anyone with even a moderate concern about
ethics understands that you do not experiment on people without their
explicit consent (*cough* UoM and hypocrite commits *cough*).

Subjecting mailing lists to experimental AI-generated bug reports
without explicit opt-in consent from the people who receive those bug
reports is really a total non-starter.

Testing bots aren't going away any time soon, but new bots -
especially experimental ones - really need to be opt-in.
We most certainly do not need a repeat of the uncooperative, hostile
"we've turned it on and you can't opt out" model that syzbot uses...

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx