On Fri, 07 Aug 2020 14:14:48 -0600
Alex Williamson <alex.williamson@xxxxxxxxxx> wrote:

> The vfio_iommu_replay() function does not currently unwind on error,
> yet it does pin pages, perform IOMMU mapping, and modify the vfio_dma
> structure to indicate IOMMU mapping. The IOMMU mappings are torn down
> when the domain is destroyed, but the other actions go on to cause
> trouble later. For example, the iommu->domain_list can be empty if we
> only have a non-IOMMU backed mdev attached. We don't currently check
> if the list is empty before getting the first entry in the list, which
> leads to a bogus domain pointer. If a vfio_dma entry is erroneously
> marked as iommu_mapped, we'll attempt to use that bogus pointer to
> retrieve the existing physical page addresses.
>
> This is the scenario that uncovered this issue, attempting to hot-add
> a vfio-pci device to a container with an existing mdev device and DMA
> mappings, one of which could not be pinned, causing a failure adding
> the new group to the existing container and setting the conditions
> for a subsequent attempt to explode.
>
> To resolve this, we can first check if the domain_list is empty so
> that we can reject replay of a bogus domain, should we ever encounter
> this inconsistent state again in the future. The real fix though is
> to add the necessary unwind support, which means cleaning up the
> current pinning if an IOMMU mapping fails, then walking back through
> the r-b tree of DMA entries, reading from the IOMMU which ranges are
> mapped, and unmapping and unpinning those ranges. To be able to do
> this, we also defer marking the DMA entry as IOMMU mapped until all
> entries are processed, in order to allow the unwind to know the
> disposition of each entry.
>
> Fixes: a54eb55045ae ("vfio iommu type1: Add support for mediated devices")
> Signed-off-by: Alex Williamson <alex.williamson@xxxxxxxxxx>
> ---
>  drivers/vfio/vfio_iommu_type1.c | 71 ++++++++++++++++++++++++++++++++++++---
>  1 file changed, 66 insertions(+), 5 deletions(-)

Reviewed-by: Cornelia Huck <cohuck@xxxxxxxxxx>
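
FWIW, for anyone following along, below is a small user-space model of the
unwind pattern the commit message describes: pin and map each entry, defer
setting the "mapped" flag until every entry has succeeded, and on failure
walk back over whatever was already done. It is only an illustrative sketch,
not the actual vfio_iommu_type1 code; the dma_entry struct and the
pin()/unpin()/map()/unmap() helpers are stand-ins invented for the example.

/*
 * Abstract model of the error unwind described above (not the real
 * vfio code): pin+map each entry, mark entries as mapped only after
 * the whole replay succeeds, and unwind on any failure.
 */
#include <stdbool.h>
#include <stdio.h>

struct dma_entry {
	unsigned long iova;
	bool mapped;		/* set only after the whole replay succeeds */
};

/* Stand-ins for pinning and IOMMU mapping; 'fail_at' injects an error. */
static int pin(struct dma_entry *e)    { (void)e; return 0; }
static void unpin(struct dma_entry *e) { (void)e; }
static void unmap(struct dma_entry *e) { (void)e; }
static int map(struct dma_entry *e, int idx, int fail_at)
{
	(void)e;
	return idx == fail_at ? -1 : 0;
}

static int replay(struct dma_entry *entries, int n, int fail_at)
{
	int i, done;

	for (done = 0; done < n; done++) {
		if (pin(&entries[done]))
			goto unwind;
		if (map(&entries[done], done, fail_at)) {
			unpin(&entries[done]);	/* clean up the current entry */
			goto unwind;
		}
	}

	/* Only now is it safe to record that everything is IOMMU mapped. */
	for (i = 0; i < n; i++)
		entries[i].mapped = true;
	return 0;

unwind:
	/* Walk back over the entries that were already mapped and pinned. */
	while (done--) {
		unmap(&entries[done]);
		unpin(&entries[done]);
	}
	return -1;
}

int main(void)
{
	struct dma_entry e[3] = { { 0x1000, false }, { 0x2000, false },
				  { 0x3000, false } };

	/* Injecting a failure at the third entry unwinds the first two. */
	printf("replay: %d (all entries left unmapped on failure)\n",
	       replay(e, 3, 2));
	return 0;
}

Deferring the mapped flag is what lets the unwind path know the disposition
of each entry, exactly as the commit message argues.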