Re: [PATCH v4 4/6] vfio/type1: check dma map request is within a valid iova range

On Tue, 27 Feb 2018 09:26:37 +0100
Auger Eric <eric.auger@xxxxxxxxxx> wrote:

> Hi,
> On 27/02/18 00:13, Alex Williamson wrote:
> > On Mon, 26 Feb 2018 23:05:43 +0100
> > Auger Eric <eric.auger@xxxxxxxxxx> wrote:
> >   
> >> Hi Shameer,
> >>
> >> [Adding Robin in CC]
> >> On 21/02/18 13:22, Shameer Kolothum wrote:  
> >>> This checks and rejects any dma map request outside the valid iova
> >>> ranges.
> >>>
> >>> Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@xxxxxxxxxx>
> >>> ---
> >>>  drivers/vfio/vfio_iommu_type1.c | 22 ++++++++++++++++++++++
> >>>  1 file changed, 22 insertions(+)
> >>>
> >>> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> >>> index a80884e..3049393 100644
> >>> --- a/drivers/vfio/vfio_iommu_type1.c
> >>> +++ b/drivers/vfio/vfio_iommu_type1.c
> >>> @@ -970,6 +970,23 @@ static int vfio_pin_map_dma(struct vfio_iommu *iommu, struct vfio_dma *dma,
> >>>  	return ret;
> >>>  }
> >>>  
> >>> +/*
> >>> + * Check dma map request is within a valid iova range
> >>> + */
> >>> +static bool vfio_iommu_iova_dma_valid(struct vfio_iommu *iommu,
> >>> +				dma_addr_t start, dma_addr_t end)
> >>> +{
> >>> +	struct list_head *iova = &iommu->iova_list;
> >>> +	struct vfio_iova *node;
> >>> +
> >>> +	list_for_each_entry(node, iova, list) {
> >>> +		if ((start >= node->start) && (end <= node->end))
> >>> +			return true;
> >>> +	}
> >>> +
> >>> +	return false;
> >>> +}
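For reference, elsewhere in the same patch this helper gates the map
path; roughly as follows (paraphrased from the series rather than
quoted in the hunk above):

	/* In vfio_dma_do_map(), before the new mapping is set up:
	 * fail the request unless [iova, iova + size - 1] falls
	 * entirely within one of the allowed IOVA ranges.
	 */
	if (!vfio_iommu_iova_dma_valid(iommu, iova, iova + size - 1)) {
		ret = -EINVAL;
		goto out_unlock;
	}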
> >> I am now confused by the fact that this change will prevent existing
> >> QEMU from working with this series on some platforms. For instance,
> >> the QEMU virt machine's GPA space collides with the Seattle PCI host
> >> bridge windows. On ARM, the smmu and smmuv3 drivers report the PCI
> >> host bridge windows as reserved regions, which does not seem to be
> >> the case on other platforms. The change happened in commit
> >> 273df9635385b2156851c7ee49f40658d7bcb29d ("iommu/dma: Make PCI
> >> window reservation generic").
> >>
> >> For background, we already discussed the topic after LPC 2016. See
> >> https://www.spinics.net/lists/kernel/msg2379607.html.
> >>
> >> So is it the right choice to expose PCI host bridge windows as
> >> reserved regions? If so, shouldn't this series distinguish between
> >> those and MSI windows, and not reject user space DMA_MAP attempts
> >> within PCI host bridge windows?
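(For context, the generic PCI window reservation added by that commit
walks the host bridge windows and reports each memory window as a
reserved region, roughly as sketched below; this is a paraphrase for
readers of the archive, not code quoted in this thread:)

	/* Sketch of the logic from commit 273df9635385 ("iommu/dma:
	 * Make PCI window reservation generic"): every memory window
	 * of the PCI host bridge is reported as IOMMU_RESV_RESERVED,
	 * so it ends up excluded from the usable IOVA list that this
	 * series builds.
	 */
	struct pci_host_bridge *bridge;
	struct resource_entry *window;

	bridge = pci_find_host_bridge(to_pci_dev(dev)->bus);
	resource_list_for_each_entry(window, &bridge->windows) {
		struct iommu_resv_region *region;

		if (resource_type(window->res) != IORESOURCE_MEM)
			continue;

		region = iommu_alloc_resv_region(window->res->start - window->offset,
						 resource_size(window->res),
						 0, IOMMU_RESV_RESERVED);
		if (!region)
			return;

		list_add_tail(&region->list, list);
	}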
> > 
> > If the QEMU machine GPA collides with a reserved region today, then
> > either:
> > 
> > a) The mapping through the IOMMU works and the reserved region is wrong
> > 
> > or
> > 
> > b) The mapping doesn't actually work, QEMU is at risk of data loss by
> > being told that it worked, and we're justified in changing that
> > behavior.
> > 
> > Without knowing the specifics of SMMU, it doesn't particularly make
> > sense to me to mark the entire PCI hierarchy MMIO range as reserved,
> > unless perhaps the IOMMU is incapable of translating those IOVAs.  
> To me, the limitation does not come from the SMMU itself, which is a
> separate HW block sitting between the root complex and the
> interconnect. If ACS is not enforced by the PCIe subsystem, the
> transaction will never reach the IOMMU.

If the limitation is not from the SMMU, then why is it being exposed
via the IOMMU API?  This seems like overreach, trying to compensate for
a limitation elsewhere by imposing a restriction at the IOMMU.

> In the case of such an overlap, shouldn't we just warn the end user
> that the situation is dangerous, instead of forbidding a use case
> which worked "in most cases" until now?

How do we distinguish between reserved ranges that are really reserved
and those that are merely advisory?  This seems to defeat the whole
purpose of the reserved ranges.  Furthermore, if the vfio IOVA list we
report to the user is only advisory, what's the point of reporting it?
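(A later patch in this series exposes that IOVA list to userspace
through the VFIO_IOMMU_GET_INFO capability chain. A hedged sketch of
how a user could check a candidate GPA range against it, assuming the
capability layout proposed in the series:)

	/* 'cap' points at the parsed IOVA-range capability; the struct
	 * layout here follows the series' proposal and may differ from
	 * what is finally merged.
	 */
	static bool gpa_range_valid(struct vfio_iommu_type1_info_cap_iova_range *cap,
				    __u64 start, __u64 end)
	{
		__u32 i;

		for (i = 0; i < cap->nr_iovas; i++)
			if (start >= cap->iova_ranges[i].start &&
			    end <= cap->iova_ranges[i].end)
				return true;

		return false;	/* would overlap a reserved region */
	}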

Peer-to-peer MMIO within an IOMMU group is a tough problem, and one
that we've mostly ignored as we strive towards singleton IOMMU groups,
which are more the normal case on "enterprise" x86 hardware.  The user
does have some ability to determine potential conflicts, so I don't
necessarily see this as exclusively a kernel issue to solve.  However,
if the user needs to account for potentially conflicting MMIO outside of
the IOMMU group which they're provided, then yeah, we have a bigger
issue.  Thanks,

Alex


