RE: [RFC PATCH 0/3] VFIO: Report IOMMU fault event to userspace

Loop IOMMU mailing list and add Ashok and Jacob.

> -----Original Message-----
> From: Lan, Tianyu
> Sent: Thursday, March 16, 2017 9:43 AM
> To: Alex Williamson <alex.williamson@xxxxxxxxxx>; Liu, Yi L <yi.l.liu@xxxxxxxxx>
> Cc: kvm@xxxxxxxxxxxxxxx; Tian, Kevin <kevin.tian@xxxxxxxxx>; mst@xxxxxxxxxx;
> jan.kiszka@xxxxxxxxxxx; jasowang@xxxxxxxxxx; peterx@xxxxxxxxxx;
> david@xxxxxxxxxxxxxxxxxxxxx; Jean-Philippe.Brucker@xxxxxxx
> Subject: Re: [RFC PATCH 0/3] VFIO: Report IOMMU fault event to userspace
> 
> On 2017-03-16 03:52, Alex Williamson wrote:
> > On Wed, 15 Mar 2017 06:17:06 +0000
> > "Liu, Yi L" <yi.l.liu@xxxxxxxxx> wrote:
> >
> >>> -----Original Message-----
> >>> From: Lan, Tianyu
> >>> Sent: Tuesday, February 28, 2017 11:58 PM
> >>> To: Alex Williamson <alex.williamson@xxxxxxxxxx>
> >>> Cc: kvm@xxxxxxxxxxxxxxx; Tian, Kevin <kevin.tian@xxxxxxxxx>;
> >>> mst@xxxxxxxxxx; jan.kiszka@xxxxxxxxxxx; jasowang@xxxxxxxxxx;
> >>> peterx@xxxxxxxxxx; david@xxxxxxxxxxxxxxxxxxxxx; Liu, Yi L
> >>> <yi.l.liu@xxxxxxxxx>; Jean-Philippe.Brucker@xxxxxxx
> >>> Subject: Re: [RFC PATCH 0/3] VFIO: Report IOMMU fault event to
> >>> userspace
> >>>
> >>> Hi Alex:
> >>> Do the following comments make sense to you? In the previous
> >>> discussion, we concluded that the type1 IOMMU driver isn't
> >>> suitable for dynamic map/unmap, and that we should either extend
> >>> type1 or introduce a type2. For fault event reporting, and for
> >>> future IOMMU-related functions, we need to figure out whether they
> >>> belong in vfio-pci, in the vfio-IOMMU driver, or somewhere else.
> >>> SVM support in a VM will face the same choice. Since Jean-Philippe
> >>> has posted SVM support for ARM, I think most platforms have this
> >>> requirement. Thanks.
> >>
> >> Hello Alex,
> >>
> >> Do you have any further suggestions on where to place the
> >> reporting channel in VFIO? Our options seem to include vfio-pci
> >> and the vfio-IOMMU driver.
> >
> > Here's my thought process, I start out leaning towards vfio-pci
> > because the vfio container can actually handle multiple IOMMU domains,
> > each of which is theoretically hosted on different physical IOMMUs,
> > possibly by different vendors.  So we can't even guarantee that we
> > have a single vendor error format per container.  A device however
> > only maps through a single IOMMU and therefore only has a single error
> > format. Devices already support various interrupt and error signaling
> > mechanisms and we already have device specific regions which could be
> > used to expose some form of error log.  It also removes any sort of
> > source ID from the error report.
> 
> Agree.
> 
> >
> > Also I presume that this vIOMMU use case is not the only case where a
> > driver would want to be notified of IOMMU faults, in-kernel drivers
> > might want this too.  Drivers making use of the DMA API don't really
> > have any visibility to the IOMMU domain in use, so the framework we
> > use to connect drivers with the IOMMU faults probably needs to
> > abstract that.
> 
> Yes, device page requests (part of SVM support) on bare metal also
> require the device driver to receive IOMMU fault events (page request
> events) from the IOMMU driver. So it's necessary to add such an
> abstraction layer between the IOMMU driver and device drivers
> (including the VFIO-PCI driver).
> 
> To my mind, the IOMMU core first needs to provide a fault event
> reporting notifier and a common fault event format.
> 
> > Here's the problem though, in-kernel drivers are not going to accept
> > IOMMU vendor specific fault reporting.  So while we could have maybe
> > used device specific regions in vfio to report vendor specific faults,
> > that abstraction problem needs to be solved for any in-kernel user
> > anyway.
> 
> It looks like we still need a common fault format to pass fault
> events between the IOMMU and VFIO-PCI. A given device may be used on
> different platforms, so the device driver should not carry
> platform-specific code to handle fault events. If we already have
> such a common structure, fault event reporting from VFIO-PCI to QEMU
> can reuse it as well.
> Otherwise, we would have to convert the fault event before passing it
> to the vIOMMU, since the vIOMMU may belong to a different platform,
> unless we require that the vIOMMU match the platform we are running
> on (e.g. a virtual VT-d that can only be used on an Intel platform).
> 
> >
> > Now, if we go back and start from the premise that we have in-kernel
> > infrastructure to report IOMMU faults to drivers in a common,
> > non-vendor specific way, does that change my conclusion in the first
> > paragraph since not having a consistent error format was a
> > contributing factor.  It seems like a common error format is not the
> > only problem with a container hosting multiple domains though.  What
> > if we have a container where some domains are capable of reporting
> > faults and others are not.  Could a user positively determine that a
> > device is capable of reporting IOMMU faults in that case?  It seems
> > not.  So perhaps the vfio device is still the proper place to host
> > that reporting and we can simply leverage the common error reporting
> > in the host layer to expose similar common reporting to the user,
> > which also provides the benefit that the solution isn't locked to
> > matching physical IOMMU and vIOMMU from the same vendor.  Thanks,
> >
> > Alex
> >
> 
> 
> --
> Best regards
> Tianyu Lan



