4.7-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Will Deacon <will.deacon@xxxxxxx>

commit 7c6d90e2bb1a98b86d73b9e8ab4d97ed5507e37c upstream.

The implementation of iova_to_phys for the long-descriptor ARM
io-pgtable code always masks with the granule size when inserting the
low virtual address bits into the physical address determined from the
page tables. In cases where the leaf entry is found before the final
level of table (i.e. due to a block mapping), this results in rounding
down to the bottom page of the block mapping. Consequently, the
physical address range batching in vfio_unmap_unpin() is defeated and
we end up taking the long way home.

This patch fixes the problem by masking the virtual address with the
appropriate mask for the level at which the leaf descriptor is located.
The short-descriptor code already gets this right, so no change is
needed there.

Reported-by: Robin Murphy <robin.murphy@xxxxxxx>
Tested-by: Robin Murphy <robin.murphy@xxxxxxx>
Signed-off-by: Will Deacon <will.deacon@xxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>

---
 drivers/iommu/io-pgtable-arm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -576,7 +576,7 @@ static phys_addr_t arm_lpae_iova_to_phys
 	return 0;

 found_translation:
-	iova &= (ARM_LPAE_GRANULE(data) - 1);
+	iova &= (ARM_LPAE_BLOCK_SIZE(lvl, data) - 1);
 	return ((phys_addr_t)iopte_to_pfn(pte,data) << data->pg_shift) | iova;
 }
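
For illustration, a minimal userspace sketch (not kernel code; the
granule/block constants, addresses and variable names are made up) of the
masking difference described in the commit message, assuming a 4K granule
and a leaf descriptor found at a 2 MiB block mapping at level 2:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

#define GRANULE_SIZE	(1ULL << 12)	/* 4 KiB granule (page size) */
#define BLOCK_SIZE_LVL2	(1ULL << 21)	/* 2 MiB block mapping at level 2 */

int main(void)
{
	uint64_t iova       = 0x40234567;	/* IOVA inside a 2 MiB block */
	uint64_t block_phys = 0x80200000;	/* block base taken from the leaf PTE */

	/* Old behaviour: mask with the granule size, so only the page
	 * offset survives and the result is rounded down to the bottom
	 * page of the block. */
	uint64_t old_phys = block_phys | (iova & (GRANULE_SIZE - 1));

	/* Fixed behaviour: mask with the block size for the level of the
	 * leaf descriptor, preserving the full offset within the block. */
	uint64_t new_phys = block_phys | (iova & (BLOCK_SIZE_LVL2 - 1));

	printf("old: 0x%" PRIx64 "\n", old_phys);	/* 0x80200567 */
	printf("new: 0x%" PRIx64 "\n", new_phys);	/* 0x80234567 */
	return 0;
}

With the block-size mask, consecutive IOVAs inside the same block map to
consecutive physical addresses again, so the physical address range
batching in vfio_unmap_unpin() can cover the whole block instead of
falling back to page-sized chunks.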