On 2024/10/27 22:21, Leon Romanovsky wrote:
> +/**
> + * dma_iova_sync - Sync IOTLB
> + * @dev: DMA device
> + * @state: IOVA state
> + * @offset: offset into the IOVA state to sync
> + * @size: size of the buffer
> + * @ret: return value from the last IOVA operation
> + *
> + * Sync IOTLB for the given IOVA state. This function should be called on
> + * the IOVA-contiguous range created by one or more dma_iova_link() calls
> + * to sync the IOTLB.
> + */
> +int dma_iova_sync(struct device *dev, struct dma_iova_state *state,
> +		size_t offset, size_t size, int ret)
> +{
> +	struct iommu_domain *domain = iommu_get_dma_domain(dev);
> +	struct iommu_dma_cookie *cookie = domain->iova_cookie;
> +	struct iova_domain *iovad = &cookie->iovad;
> +	dma_addr_t addr = state->addr + offset;
> +	size_t iova_start_pad = iova_offset(iovad, addr);
> +
> +	addr -= iova_start_pad;
> +	size = iova_align(iovad, size + iova_start_pad);
> +
> +	if (!ret)
> +		ret = iommu_sync_map(domain, addr, size);
> +	if (ret)
> +		iommu_unmap(domain, addr, size);
It seems odd that no mapping is done in this helper, yet an unmap is added in its failure path. Perhaps I have overlooked something? To my understanding, it should look like below:

	return iommu_sync_map(domain, addr, size);

and drivers that use this interface should then do something like:

	ret = dma_iova_sync(...);
	if (ret)
		dma_iova_destroy(...);
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(dma_iova_sync);
Thanks, baolu