* Chris Wright (chrisw@xxxxxxxxxxxx) wrote:
> > Mike Travis wrote:
> > > Region 1: Memory at f8200000000 (64-bit, prefetchable) [size=256M]
> > > Region 3: Memory at 90000000 (64-bit, non-prefetchable) [size=32M]
> > >
> > > So this 44-bit MMIO address 0xf8200000000 ends up in the rbtree. As
> > > DMA maps get added to and deleted from the rbtree, we can end up with
> > > a cached entry pointing to this 0xf8200000000 entry... this is what
> > > results in the code handing out the invalid DMA map of 0xf81fffff000:
> > >
> > >   [ 0xf8200000000-1 >> PAGE_SIZE << PAGE_SIZE ]
> > >
> > > The IOVA code needs to better honor the "limit_pfn" when allocating
> > > these maps.
>
> This means we could get the MMIO address range (it's no longer reserved).
> It seems to me the DMA transaction would then become a peer-to-peer
> transaction if ACS is not enabled, which could show up as a random
> register write in that GPU's 256M BAR (i.e. broken).
>
> The iova allocation should not hand out an address bigger than the
> dma_mask. What is the device's dma_mask?

Ah, looks like this is a bad interaction with the way the cached entry is
handled. I think the iova lookup should skip down to the limit_pfn rather
than assume that rb_last's pfn_lo/hi is ok just because it's in the tree.
Because you'll never hit the limit_pfn == 32bit_pfn case, it just goes
straight to rb_last in __get_cached_rbnode.
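For reference, that lookup is roughly the following (paraphrased from
memory, so field and macro names may not match your tree exactly):

	static struct rb_node *
	__get_cached_rbnode(struct iova_domain *iovad, unsigned long *limit_pfn)
	{
		struct iova *curr_iova;

		/*
		 * Only a 32-bit-limited allocation with a valid cache uses
		 * the cached node; everything else goes straight to rb_last.
		 */
		if ((*limit_pfn != iovad->dma_32bit_pfn) ||
		    (iovad->cached32_node == NULL))
			return rb_last(&iovad->rbroot);

		/*
		 * The cached node is trusted blindly: limit_pfn is rewritten
		 * to just below the cached entry's pfn_lo, with no check
		 * against the limit the caller actually passed in.
		 */
		curr_iova = container_of(iovad->cached32_node, struct iova, node);
		*limit_pfn = curr_iova->pfn_lo - 1;
		return rb_prev(iovad->cached32_node);
	}

So once cached32_node drifts onto the reserved MMIO entry (e.g. via the
rb_next() in the delete-side cache update), that *limit_pfn rewrite pushes
the allocation limit up to just below the BAR.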
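And that rewrite reproduces Mike's bogus address exactly. A trivial
userspace check of the arithmetic, assuming 4K pages:

	#include <stdio.h>

	int main(void)
	{
		unsigned long long bar = 0xf8200000000ULL; /* reserved GPU BAR */
		unsigned long long pfn_lo = bar >> 12;     /* 0xf8200000 */

		/* limit_pfn becomes pfn_lo - 1, so the map lands at... */
		printf("0x%llx\n", (pfn_lo - 1) << 12);    /* 0xf81fffff000 */
		return 0;
	}

i.e. one page below the reserved BAR, and far above anything a 32-bit
dma_mask allows.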
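One way to "skip down to the limit_pfn" would be to refuse to cache a node
that sits above the 32-bit boundary in the first place. Untested sketch
only, against the delete-side cache update:

	static void
	__cached_rbnode_delete_update(struct iova_domain *iovad, struct iova *free)
	{
		struct iova *cached_iova;

		if (!iovad->cached32_node)
			return;
		cached_iova = container_of(iovad->cached32_node, struct iova, node);

		if (free->pfn_lo >= cached_iova->pfn_lo) {
			struct rb_node *node = rb_next(&free->node);
			struct iova *iova = node ?
				container_of(node, struct iova, node) : NULL;

			/* only keep the cached node if it's still below 32bit */
			if (iova && iova->pfn_lo < iovad->dma_32bit_pfn)
				iovad->cached32_node = node;
			else
				iovad->cached32_node = NULL;
		}
	}

That way the reserved MMIO entry can never become the 32-bit cache point,
and __get_cached_rbnode's limit_pfn rewrite stays below the boundary.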