From: Bjorn Helgaas <bhelgaas@xxxxxxxxxx>
Date: Fri, 3 Apr 2015 10:45:26 -0500

> On Sun, Mar 29, 2015 at 11:32:50AM -0700, David Miller wrote:
>> From: Bjorn Helgaas <bjorn.helgaas@xxxxxxxxx>
>> Date: Sun, 29 Mar 2015 08:30:40 -0500
>>
>> > Help me understand the sparc64 situation: are you saying that BAR
>> > addresses, i.e., MMIO transactions from a CPU or a peer-to-peer DMA,
>> > can be 64 bits, but a DMA to main memory can only be 32 bits?
>> >
>> > I assume this would work if we made dma_addr_t 64 bits on sparc64.
>> > What would be the cost of doing that?
>>
>> The cost is 4 extra bytes in every data structure, kernel wide, that
>> stores DMA addresses.
>
> That much is fairly obvious.  What I don't know is how much difference
> this makes in the end.

Networking drivers, and perhaps block drivers too, have a data structure
for each entry in the send/receive rings of the device, and these rings
can be huge.  Each ring entry stores one or more DMA addresses.

Larger types mean more memory, but also more capacity cache misses in
critical code paths.

I'm really sorry if this isn't painfully obvious to you.
--
To unsubscribe from this list: send the line "unsubscribe linux-pci" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html