This is a note to let you know that I've just added the patch titled

    iommu/virtio: Return size mapped for a detached domain

to the 6.1-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     iommu-virtio-return-size-mapped-for-a-detached-domai.patch
and it can be found in the queue-6.1 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 911c6aa4a7df241884963c0861be4866caac935f
Author: Jean-Philippe Brucker <jean-philippe@xxxxxxxxxx>
Date:   Mon May 15 12:39:50 2023 +0100

    iommu/virtio: Return size mapped for a detached domain

    [ Upstream commit 7061b6af34686e7e2364b7240cfb061293218f2d ]

    When map() is called on a detached domain, the domain does not exist
    in the device so we do not send a MAP request, but we do update the
    internal mapping tree, to be replayed on the next attach. Since this
    constitutes a successful iommu_map() call, return *mapped in this
    case too.

    Fixes: 7e62edd7a33a ("iommu/virtio: Add map/unmap_pages() callbacks implementation")
    Signed-off-by: Jean-Philippe Brucker <jean-philippe@xxxxxxxxxx>
    Reviewed-by: Jason Gunthorpe <jgg@xxxxxxxxxx>
    Link: https://lore.kernel.org/r/20230515113946.1017624-3-jean-philippe@xxxxxxxxxx
    Signed-off-by: Joerg Roedel <jroedel@xxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index fe02ac772b651..fd86ccb709ec5 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -834,25 +834,26 @@ static int viommu_map_pages(struct iommu_domain *domain, unsigned long iova,
 	if (ret)
 		return ret;
 
-	map = (struct virtio_iommu_req_map) {
-		.head.type	= VIRTIO_IOMMU_T_MAP,
-		.domain		= cpu_to_le32(vdomain->id),
-		.virt_start	= cpu_to_le64(iova),
-		.phys_start	= cpu_to_le64(paddr),
-		.virt_end	= cpu_to_le64(end),
-		.flags		= cpu_to_le32(flags),
-	};
-
-	if (!vdomain->nr_endpoints)
-		return 0;
+	if (vdomain->nr_endpoints) {
+		map = (struct virtio_iommu_req_map) {
+			.head.type	= VIRTIO_IOMMU_T_MAP,
+			.domain		= cpu_to_le32(vdomain->id),
+			.virt_start	= cpu_to_le64(iova),
+			.phys_start	= cpu_to_le64(paddr),
+			.virt_end	= cpu_to_le64(end),
+			.flags		= cpu_to_le32(flags),
+		};
 
-	ret = viommu_send_req_sync(vdomain->viommu, &map, sizeof(map));
-	if (ret)
-		viommu_del_mappings(vdomain, iova, end);
-	else if (mapped)
+		ret = viommu_send_req_sync(vdomain->viommu, &map, sizeof(map));
+		if (ret) {
+			viommu_del_mappings(vdomain, iova, end);
+			return ret;
+		}
+	}
+	if (mapped)
 		*mapped = size;
 
-	return ret;
+	return 0;
 }
 
 static size_t viommu_unmap_pages(struct iommu_domain *domain, unsigned long iova,
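
For context, a rough caller-side sketch (not part of the patch above) shows why
->map_pages() must report *mapped even when no MAP request reaches the device.
The helper name and the loop are illustrative assumptions in the style of the
core IOMMU map path, not the kernel's actual implementation; the ->map_pages()
signature matches the 6.1 iommu_domain_ops callback.

#include <linux/iommu.h>

/*
 * Illustrative sketch only: map a range one page at a time, advancing by
 * however many bytes the driver reports in 'mapped'. A driver that succeeds
 * without sending a MAP request but leaves 'mapped' at 0 would make such a
 * caller bail out (or make no progress), which is what the patch fixes for
 * detached viommu domains.
 */
static int example_map_range(struct iommu_domain *domain, unsigned long iova,
			     phys_addr_t paddr, size_t size, int prot)
{
	size_t done = 0;

	while (done < size) {
		size_t mapped = 0;
		int ret;

		ret = domain->ops->map_pages(domain, iova + done, paddr + done,
					     PAGE_SIZE, 1, prot, GFP_KERNEL,
					     &mapped);
		if (ret)
			return ret;

		/* Before the fix, a detached viommu domain left mapped == 0. */
		if (!mapped)
			return -EINVAL;

		done += mapped;
	}
	return 0;
}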