Fix the calculation of the end address when flushing IOTLB entries to
RAM.  This bug has been a cause of ESP DMA errors, and it affects
HyperSPARC systems much worse than SuperSPARC systems.

Signed-off-by: Bob Breuer <breuerr@xxxxxx>
---
Just in case it's not obvious from the patch how the DMA was broken:
each DMA mapping sets up iopte entries for the IOMMU, and the IOMMU
looks only in main memory for those entries.  If a group of iopte
entries is smaller than a page but straddles a page boundary, the
broken code fails to flush the last page to RAM.

Bob
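
To make the failure mode concrete, here is a minimal userspace sketch
of the arithmetic, assuming a 4096-byte page and 8-byte ioptes purely
for illustration (the real constants come from the kernel headers, not
from this sketch).  Four ioptes starting 8 bytes before a page boundary
straddle two pages, and the old calculation stops one page short:

#include <stdio.h>

/* Simplified stand-ins for the kernel macros, for this example only. */
#define PAGE_SIZE	4096UL
#define PAGE_MASK	(~(PAGE_SIZE - 1))
#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & PAGE_MASK)

int main(void)
{
	unsigned long iopte = 0x40000ff8UL;	/* 8 bytes before a page boundary */
	unsigned long niopte = 4, size = 8;	/* 32-byte group straddles the boundary */

	/* Broken: masking before computing 'end' drops the last page. */
	unsigned long old_start = iopte & PAGE_MASK;
	unsigned long old_end = PAGE_ALIGN(old_start + niopte * size);

	/* Fixed: compute 'end' from the real pointer, then mask 'start'. */
	unsigned long new_start = iopte;
	unsigned long new_end = PAGE_ALIGN(new_start + niopte * size);
	new_start &= PAGE_MASK;

	printf("old: flush 0x%lx..0x%lx (misses page 0x%lx)\n",
	       old_start, old_end, (iopte + niopte * size - 1) & PAGE_MASK);
	printf("new: flush 0x%lx..0x%lx\n", new_start, new_end);
	return 0;
}

With the fix, 'end' is derived from the unmasked pointer, so the flush
loop also covers the second page that holds the tail of the group.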
diff --git a/arch/sparc/mm/iommu.c b/arch/sparc/mm/iommu.c
index 77840c8..7215849 100644
--- a/arch/sparc/mm/iommu.c
+++ b/arch/sparc/mm/iommu.c
@@ -144,8 +144,9 @@ static void iommu_flush_iotlb(iopte_t *i
 	unsigned long start;
 	unsigned long end;
 
-	start = (unsigned long)iopte & PAGE_MASK;
+	start = (unsigned long)iopte;
 	end = PAGE_ALIGN(start + niopte*sizeof(iopte_t));
+	start &= PAGE_MASK;
 	if (viking_mxcc_present) {
 		while(start < end) {
 			viking_mxcc_flush_page(start);