On Tue, May 21, 2024 at 10:21:23AM -0600, Alex Williamson wrote:
> > Intel GPU weirdness should not leak into making other devices
> > insecure/slow. If necessary Intel GPU only should get some variant
> > override to keep no snoop working.
> >
> > It would make alot of good sense if VFIO made the default to disable
> > no-snoop via the config space.
>
> We can certainly virtualize the config space no-snoop enable bit, but
> I'm not sure what it actually accomplishes. We'd then be relying on
> the device to honor the bit and not have any backdoors to twiddle the
> bit otherwise (where we know that GPUs often have multiple paths to get
> to config space).

I'm OK with this. If devices are insecure then they need quirks in
vfio to disclose their problems; we shouldn't punish everyone who
followed the spec because of some bad actors.

But more broadly, in a security-engineered environment we can trust
the no-snoop bit to work properly.

> We also then have the question of does the device function
> correctly if we disable no-snoop.

Other than the GPU bandwidth issue, no-snoop is not a functional
behavior.

> The more secure approach might be that we need to do these cache
> flushes for any IOMMU that doesn't maintain coherency, even for
> no-snoop transactions.  Thanks,

Did you mean 'even for snoop transactions'?

That is where this series is: it assumes a no-snoop transaction took
place, even if that is impossible because of config space, and then
does pessimistic flushes.

Jason
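
For illustration only, a minimal sketch of what virtualizing the Enable
No Snoop bit could look like, assuming the standard pcie_capability_*()
accessors and register defines from <linux/pci.h>; the
handle_devctl_write() hook below is a made-up stand-in, not the actual
vfio-pci config-write virtualization path:

/*
 * Minimal sketch only (not the actual series): force Enable No Snoop
 * off on the physical device and mask the bit out of guest writes to
 * the Device Control register.  PCI_EXP_DEVCTL, PCI_EXP_DEVCTL_NOSNOOP_EN
 * and the pcie_capability_*() accessors are standard kernel interfaces;
 * handle_devctl_write() is a hypothetical hook for whatever path
 * virtualizes config writes.
 */
#include <linux/pci.h>

static void force_no_snoop_off(struct pci_dev *pdev)
{
	/* Clear Enable No Snoop in the physical Device Control register */
	pcie_capability_clear_word(pdev, PCI_EXP_DEVCTL,
				   PCI_EXP_DEVCTL_NOSNOOP_EN);
}

static u16 handle_devctl_write(struct pci_dev *pdev, u16 guest_val)
{
	/* Never let the guest turn Enable No Snoop back on */
	guest_val &= ~PCI_EXP_DEVCTL_NOSNOOP_EN;
	pcie_capability_write_word(pdev, PCI_EXP_DEVCTL, guest_val);
	/* Value reflected back into the virtual Device Control register */
	return guest_val;
}

As Alex notes above, masking the bit this way only helps if the device
has no other path to flip it, which is exactly where per-device quirks
would come in.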