On Wed, Jul 10, 2024 at 10:05:14AM -0300, Jason Gunthorpe wrote:
> On Tue, Jul 09, 2024 at 12:43:50PM -0700, Dan Williams wrote:
>
> > A "Command Effects Log" seems like that starting point, with trust that
> > cynical abuses of that contract have a higher cost than benefit, and
> > trust that the protocol limits the potential damage of such abuse.
>
> I've taken the view that companies are now very vigilant about
> security and often have their own internal incentives and procedures
> to do secure things.
>
> If someone does a cynical security breaking thing and deploys it to a
> wide user base they are likely to be caught by a security researcher
> and embarassed with a CVE and a web site with a snappy name.

That may be the case in the server world, and for protocols such as
NVMe. My experience in the media world differs. I've seen too many
horrors to list them all here, so I'll only mention one of the worst
examples that comes to mind: a (BSP) driver taking a physical address
from unprivileged userspace and handing it to a DMA engine without any
filtering. I think this was mostly to be blamed on the developer not
knowing better; there was no malicious intent.

In general, can we trust closed-source firmware to accurately document
the side effects of pass-through commands? Again, I think the answer
differs between classes of devices; the security culture is not uniform
across the whole IT industry.

> Not 100% of course, but it is certainly not a wild west of people just
> doing whatever they want.
>
> The other half of this bargin is we have to be much clearer about what
> the security model is and what is security breaking. Like Christoph I
> often have conversations with people who don't understand the basics
> of how the Linux security models should work and are doing device-side
> work that has to fit into it.

-- 
Regards,

Laurent Pinchart