On Fri, Oct 20, 2023 at 03:43:57PM +0100, Joao Martins wrote:
> On 20/10/2023 00:59, Jason Gunthorpe wrote:
> > On Thu, Oct 19, 2023 at 12:58:29PM +0100, Joao Martins wrote:
> >> AMD has no such behaviour, though that driver per your earlier suggestion might
> >> need to wait until -rc1 for some of the refactorings get merged. Hopefully we
> >> don't need to wait for the last 3 series of AMD Driver refactoring (?) to be
> >> done as that looks to be more SVA related; Unless there's something more
> >> specific you are looking for prior to introducing AMD's domain_alloc_user().
> >
> > I don't think we need to wait, it just needs to go on the cleaning list.
> >
>
> I am not sure I followed. This suggests an post-merge cleanups, which goes in
> different direction of your original comment? But maybe I am just not parsing it
> right (sorry, just confused)

Yes post merge for the weirdo alloc flow

> >>> for themselves; so more and more I need to work on something like
> >>> iommufd_log_perf tool under tools/testing that is similar to the gup_perf to make all
> >>> performance work obvious and 'standardized'
> >
> > We have a mlx5 vfio driver in rdma-core and I have been thinking it
> > would be a nice basis for building an iommufd tester/benchmarker as it
> > has a wide set of "easilly" triggered functionality.
>
> Oh woah, that's quite awesome; I'll take a closer look; I thought rdma-core
> support for mlx5-vfio was to do direct usage of the firmware interface, but it
> appears to be for regular RDMA apps as well. I do use some RDMA to exercise
> iommu dirty tracking; but it's more like a rudimentary test inside the guest,
> not something self-contained.

I can't remember anymore how much is supported, but supporting more is
not hard work. With a simple QP/CQ you can do all sorts of interesting
DMA. Yishai would remember if QP/CQ got fully wired up

Jason
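
As a rough illustration of the "simple QP/CQ" idea above, here is a minimal
sketch using the regular libibverbs path (not the rdma-core mlx5-vfio
direct-firmware interface being discussed); the device choice, buffer size,
and QP capabilities are illustrative assumptions only:

/*
 * Hedged sketch, not the proposed iommufd tester: a bare CQ/QP setup via
 * standard libibverbs. The registered MR is the memory the HCA would DMA
 * into/out of once work requests are posted, i.e. the pages that iommufd
 * dirty tracking would be expected to report.
 */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
	struct ibv_device **devs = ibv_get_device_list(NULL);
	if (!devs || !devs[0])
		return 1;

	struct ibv_context *ctx = ibv_open_device(devs[0]);
	struct ibv_pd *pd = ibv_alloc_pd(ctx);

	/* Buffer the device will DMA to; size is an arbitrary example. */
	size_t len = 2 * 1024 * 1024;
	void *buf = aligned_alloc(4096, len);
	struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
				       IBV_ACCESS_LOCAL_WRITE |
				       IBV_ACCESS_REMOTE_WRITE);

	/* One CQ shared by send and receive completions. */
	struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);

	struct ibv_qp_init_attr attr = {
		.send_cq = cq,
		.recv_cq = cq,
		.cap = {
			.max_send_wr  = 16,
			.max_recv_wr  = 16,
			.max_send_sge = 1,
			.max_recv_sge = 1,
		},
		.qp_type = IBV_QPT_RC,
	};
	struct ibv_qp *qp = ibv_create_qp(pd, &attr);
	if (!qp)
		return 1;

	printf("qp 0x%x created; RESET->INIT->RTR->RTS transition,\n"
	       "posting WRs and polling the CQ would generate the DMA\n",
	       qp->qp_num);

	ibv_destroy_qp(qp);
	ibv_destroy_cq(cq);
	ibv_dereg_mr(mr);
	free(buf);
	ibv_dealloc_pd(pd);
	ibv_close_device(ctx);
	ibv_free_device_list(devs);
	return 0;
}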