On Sat, 2022-05-07 at 16:52 +0800, yf.wang@xxxxxxxxxxxx wrote:
> From: Yunfei Wang <yf.wang@xxxxxxxxxxxx>
>
> The return type of iommu_map_sg_atomic() is ssize_t, but the iova
> size is a size_t, i.e. one is signed while the other is unsigned.
>
> When the iommu_map_sg_atomic() return value is compared against the
> iova size, the signed value is converted to unsigned. So if the iova
> mapping fails and iommu_map_sg_atomic() returns a negative error
> code, (ret < iova_len) is false, the iova is never freed, and the
> master still "successfully" gets the iova of the failed mapping,
> which is not expected.
>
> Therefore, check the return value of iommu_map_sg_atomic() for the
> negative (error) case separately before comparing it to the iova size.
>
> Fixes: ad8f36e4b6b1 ("iommu: return full error code from iommu_map_sg[_atomic]()")
> Signed-off-by: Yunfei Wang <yf.wang@xxxxxxxxxxxx>
> Cc: <stable@xxxxxxxxxxxxxxx> # 5.15.*
> ---
>  drivers/iommu/dma-iommu.c | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 09f6e1c0f9c0..2932281e93fc 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -776,6 +776,7 @@ static struct page **__iommu_dma_alloc_noncontiguous(struct device *dev,
>  	unsigned int count, min_size, alloc_sizes = domain->pgsize_bitmap;
>  	struct page **pages;
>  	dma_addr_t iova;
> +	ssize_t ret;
>
>  	if (static_branch_unlikely(&iommu_deferred_attach_enabled) &&
>  	    iommu_deferred_attach(dev, domain))
> @@ -813,8 +814,8 @@ static struct page **__iommu_dma_alloc_noncontiguous(struct device *dev,
>  			arch_dma_prep_coherent(sg_page(sg), sg->length);
>  	}
>
> -	if (iommu_map_sg_atomic(domain, iova, sgt->sgl, sgt->orig_nents, ioprot)
> -			< size)
> +	ret = iommu_map_sg_atomic(domain, iova, sgt->sgl, sgt->orig_nents, ioprot);
> +	if (ret < 0 || ret < size)

Wouldn't "if (IS_ERR_VALUE(ret) || ret < size)" be more readable here?
>  		goto out_free_sg;
>
>  	sgt->sgl->dma_address = iova;
> @@ -1209,7 +1210,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
>  	 * implementation - it knows better than we do.
>  	 */
>  	ret = iommu_map_sg_atomic(domain, iova, sg, nents, prot);
> -	if (ret < iova_len)
> +	if (ret < 0 || ret < iova_len)
>  		goto out_free_iova;
>
>  	return __finalise_sg(dev, sg, nents, iova);