This patch updates dma_direct_unmap_sg() to mark each scatter/gather
entry invalid after it is unmapped. This fixes two issues:

1. It allows the unmapping code to tolerate a double unmap.
2. It prevents the NVMe driver from erroneously treating an unmapped
   DMA address as mapped.

The bug that motivated this patch was the following sequence, which
occurred within the NVMe driver, with the kernel flag `swiotlb=force`:

* NVMe driver calls dma_direct_map_sg().
* dma_direct_map_sg() fails partway through the scatter/gather list.
* dma_direct_map_sg() calls dma_direct_unmap_sg() to unmap any entries
  that had already been mapped.
* NVMe driver calls dma_direct_unmap_sg() redundantly, leading to a
  double unmap, which is a bug.

With this patch, a Hadoop workload running on a cluster of three AMD
SEV VMs is able to succeed. Without the patch, the Hadoop workload
suffers application-level and even VM-level failures.

Tested-by: Jianxiong Gao <jxgao@xxxxxxxxxx>
Tested-by: Marc Orr <marcorr@xxxxxxxxxx>
Reviewed-by: Jianxiong Gao <jxgao@xxxxxxxxxx>
Signed-off-by: Marc Orr <marcorr@xxxxxxxxxx>
---
 kernel/dma/direct.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 0a4881e59aa7..3d9b17fe5771 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -374,9 +374,11 @@ void dma_direct_unmap_sg(struct device *dev, struct scatterlist *sgl,
 	struct scatterlist *sg;
 	int i;
 
-	for_each_sg(sgl, sg, nents, i)
+	for_each_sg(sgl, sg, nents, i) {
 		dma_direct_unmap_page(dev, sg->dma_address, sg_dma_len(sg),
 				      dir, attrs);
+		sg->dma_address = DMA_MAPPING_ERROR;
+	}
 }
 EXPORT_SYMBOL(dma_direct_unmap_sg);
 #endif
--
2.30.0.284.gd98b1dd5eaa7-goog
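
P.S. For reviewers, a minimal sketch (not part of this patch) of why
marking entries with DMA_MAPPING_ERROR makes a redundant unmap
tolerable. The helper name example_unmap_sg_idempotent() is
hypothetical; it only illustrates that once entries are invalidated, a
second unmap pass can detect and skip them:

static void example_unmap_sg_idempotent(struct device *dev,
		struct scatterlist *sgl, int nents,
		enum dma_data_direction dir, unsigned long attrs)
{
	struct scatterlist *sg;
	int i;

	for_each_sg(sgl, sg, nents, i) {
		/* Entry was invalidated by a prior unmap; skip it. */
		if (sg->dma_address == DMA_MAPPING_ERROR)
			continue;
		dma_direct_unmap_page(dev, sg->dma_address, sg_dma_len(sg),
				      dir, attrs);
		/* Mark the entry invalid so a later unmap is a no-op. */
		sg->dma_address = DMA_MAPPING_ERROR;
	}
}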