[bug report] vfio: hugepage support for vfio_iommu_type1

Hello Alex Williamson,

The patch 166fd7d94afd: "vfio: hugepage support for vfio_iommu_type1"
from Jun 21, 2013, leads to the following static checker warning:

	drivers/vfio/vfio_iommu_type1.c:819 vfio_dma_do_unmap()
	warn: overflowed symbol reused:  'unmap->size'

drivers/vfio/vfio_iommu_type1.c
   756  static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
   757                               struct vfio_iommu_type1_dma_unmap *unmap)
   758  {
   759          uint64_t mask;
   760          struct vfio_dma *dma, *dma_last = NULL;
   761          size_t unmapped = 0;
   762          int ret = 0, retries = 0;
   763  
   764          mask = ((uint64_t)1 << __ffs(vfio_pgsize_bitmap(iommu))) - 1;
   765  
   766          if (unmap->iova & mask)
   767                  return -EINVAL;
   768          if (!unmap->size || unmap->size & mask)
   769                  return -EINVAL;

unmap is controlled by the user from the ioctl.  ->iova and ->size are
type u64.  It would be simple enough to add an integer overflow check
here, but I'd be tempted to do something like:

		if (unmap->size + unmap->iova < unmap->size ||
		    unmap->size + unmap->iova > SIZE_MAX)
			return -EINVAL;

This matters because below we pass it to vfio_find_dma(), which takes a
size_t rather than a u64.

I don't think it actually matters in practice.  The heuristic here is that
we look for user-triggered integer overflows where the overflowed value is
then re-used.  There are too many integer overflows (thousands) which
aren't harmful, and I haven't yet figured out an easy way to filter for
just the harmful ones...

   770  
   771          WARN_ON(mask & PAGE_MASK);
   772  again:
   773          mutex_lock(&iommu->lock);
   774  
   775          /*
   776           * vfio-iommu-type1 (v1) - User mappings were coalesced together to
   777           * avoid tracking individual mappings.  This means that the granularity
   778           * of the original mapping was lost and the user was allowed to attempt
   779           * to unmap any range.  Depending on the contiguousness of physical
   780           * memory and page sizes supported by the IOMMU, arbitrary unmaps may
   781           * or may not have worked.  We only guaranteed unmap granularity
   782           * matching the original mapping; even though it was untracked here,
   783           * the original mappings are reflected in IOMMU mappings.  This
   784           * resulted in a couple unusual behaviors.  First, if a range is not
   785           * able to be unmapped, ex. a set of 4k pages that was mapped as a
   786           * 2M hugepage into the IOMMU, the unmap ioctl returns success but with
   787           * a zero sized unmap.  Also, if an unmap request overlaps the first
   788           * address of a hugepage, the IOMMU will unmap the entire hugepage.
   789           * This also returns success and the returned unmap size reflects the
   790           * actual size unmapped.
   791           *
   792           * We attempt to maintain compatibility with this "v1" interface, but
   793           * we take control out of the hands of the IOMMU.  Therefore, an unmap
   794           * request offset from the beginning of the original mapping will
   795           * return success with zero sized unmap.  And an unmap request covering
   796           * the first iova of mapping will unmap the entire range.
   797           *
   798           * The v2 version of this interface intends to be more deterministic.
   799           * Unmap requests must fully cover previous mappings.  Multiple
   800           * mappings may still be unmapped by specifying large ranges, but there
   801           * must not be any previous mappings bisected by the range.  An error
   802           * will be returned if these conditions are not met.  The v2 interface
   803           * will only return success and a size of zero if there were no
   804           * mappings within the range.
   805           */
   806          if (iommu->v2) {
   807                  dma = vfio_find_dma(iommu, unmap->iova, 1);
   808                  if (dma && dma->iova != unmap->iova) {
   809                          ret = -EINVAL;
   810                          goto unlock;
   811                  }
   812                  dma = vfio_find_dma(iommu, unmap->iova + unmap->size - 1, 0);
   813                  if (dma && dma->iova + dma->size != unmap->iova + unmap->size) {
   814                          ret = -EINVAL;
   815                          goto unlock;
   816                  }
   817          }
   818  
   819          while ((dma = vfio_find_dma(iommu, unmap->iova, unmap->size))) {
   820                  if (!iommu->v2 && unmap->iova > dma->iova)
   821                          break;
   822                  /*
   823                   * Task with same address space who mapped this iova range is
   824                   * allowed to unmap the iova range.
   825                   */
   826                  if (dma->task->mm != current->mm)
   827                          break;
   828  
   829                  if (!RB_EMPTY_ROOT(&dma->pfn_list)) {
   830                          struct vfio_iommu_type1_dma_unmap nb_unmap;
   831  

regards,
dan carpenter
