Hi Arnd,

On: 26/03/2014 11:14, Arnd wrote:
> Subject: Re: [PATCH v5 6/9] ARM: shmobile: Add PCIe device tree nodes for R8A7790
>
> On Wednesday 26 March 2014 11:01:46 Phil.Edworthy@xxxxxxxxxxx wrote:
> > On: 26/03/2014 10:34, Arnd wrote:
> > > Subject: Re: [PATCH v5 6/9] ARM: shmobile: Add PCIe device tree nodes for R8A7790
> > >
> > > On Wednesday 26 March 2014 09:55:04 Phil.Edworthy@xxxxxxxxxxx wrote:
> > > > Hi Arnd,
> > > >
> > > > On: 25/03/2014 18:42, Arnd wrote:
> > > > > Subject: Re: [PATCH v5 6/9] ARM: shmobile: Add PCIe device tree nodes for R8A7790
> > > > >
> > > > > On Tuesday 25 March 2014 16:56:41 Phil Edworthy wrote:
> > > > > > +		/* Map all possible DDR as inbound ranges */
> > > > > > +		dma-ranges = <0x42000000 0 0x40000000 0 0x40000000 0 0x80000000
> > > > > > +			      0x43000000 1 0x80000000 1 0x80000000 0 0x80000000>;
> > > > >
> > > > > Typo: 0x43000000 should be 0x42000000 I guess.
> > > > I used 0x43000000 as this is a 64-bit type. The OF PCI range code
> > > > currently treats both 32 and 64-bit types the same way, but I thought
> > > > it would be good to set this in case we ever need to use it.
> > >
> > > Ah, I forgot about the space identifier. It looks correct then, but
> > > it seems a little strange to use a 32-bit identifier in one case
> > > and a 64-bit one in the other.
> >
> > If the OF PCI range code allowed the PCIe host driver to determine if it's
> > a 32-bit mapping, we could use that and get a small performance
> > improvement with PCIe throughput.
>
> I don't think it's supposed to care. Some of the upper bits of the ranges
> only really make sense for PCI device registers, not for the top-level
> ranges property. The driver can however still look at the address itself
> to get that information.
Ah, yes that is a possibility.

> > > > Since the OF PCI range code treats both 32 and 64-bit types the same
> > > > way, my PCIe driver only creates 64-bit mappings.
> > > > In addition, the PCIe controller has to use a 64-bit mapping for
> > > > anything over 2GiB. Based on this, I think it's sensible to leave
> > > > the mappings as 1-to-1.
> > >
> > > I'm not following, sorry. What is the hardware requirement in the
> > > controller?
> >
> > With this controller, you can only specify maps whose size is a power of
> > two, and the size must be less than or equal to the CPU address alignment.
> > Further, when the size is 4GiB, you have to use a 64-bit mapping. Thinking
> > about it, the 4GiB case is not relevant to our discussion about 32-bit vs
> > 64-bit mappings.
>
> But the ranges you specified in the property don't actually fit in those
> constraints: you have a range with size 0x80000000 and start 0x40000000,
> which you say can't be programmed into the hardware.
Actually, the driver checks the dma-ranges against these constraints, and
if necessary will create multiple mappings to fulfil the requested
dma-ranges.

> > Still, my comment about the OF PCI range code treating both 32 and 64-bit
> > types the same way means that the PCIe host driver has to assume it's a
> > 64-bit mapping.
>
> I was thinking more of PCI devices than the host itself. If the host
> driver can verify that all mappings are in the first 4GB and cover all of
> RAM, we won't have to use the swiotlb for devices that don't support 64-bit
> DMA, which is a very significant performance difference.
Ok, I think I understand. However, all the other PCI host drivers just do
1-to-1 mapping between PCI and CPU addresses, right? Whilst it might be
nice to be able to support mapping CPU addresses above 4GiB to PCI
addresses under 4GiB, can that be something to consider later on?

Thanks
Phil
--
To unsubscribe from this list: send the line "unsubscribe linux-pci" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html