On Wed, Dec 12, 2012 at 4:16 PM, Thomas Petazzoni
<thomas.petazzoni@xxxxxxxxxxxxxxxxxx> wrote:
> Dear Rob Herring,
>
> On Mon, 10 Dec 2012 17:24:44 -0600, Rob Herring wrote:
>
>> > Marvell SoCs have up to 20 configurable address windows, which allow
>> > you, at run time, to say "I would like the range from physical
>> > address 0xYYYYYYYY to 0xZZZZZZZZ to correspond to the PCIe device
>> > in port 1, lane 2, or to the NAND, or to this or that device".
>> > Therefore, in the PCIe driver I proposed for the Armada 370/XP SoCs
>> > [1], there is no need to encode all those ranges statically in the
>> > DT.
>>
>> That's not a unique feature. I'm not sure if any powerpc systems do
>> that though.
>
> Yes, probably not a unique feature.
>
>> > The only "ranges" property I'm using is to allow the DT sub-nodes
>> > describing each PCIe port/lane to access the CPU registers that
>> > allow to see if the PCIe link is up or down, access the PCI
>> > configuration space and so on. So all ranges in my "ranges"
>> > property correspond to normal CPU registers, like the one you would
>> > put in the "reg" property for any device. The fact that those
>> > devices are PCIe is really orthogonal here.
>>
>> That doesn't really sound right.
>
> Very likely, but I still don't get what is "the right way".

Hi Thomas,

I just went and looked at your binding. Here's the snippet I found
interesting:

	pcie-controller {
+		compatible = "marvell,armada-370-xp-pcie";
+		status = "disabled";
+		#address-cells = <1>;
+		#size-cells = <1>;
+		ranges = <0       0xd0040000 0x2000	/* port0x1_port0 */
+			  0x2000  0xd0042000 0x2000	/* port2x1_port0 */
+			  0x4000  0xd0044000 0x2000	/* port0x1_port1 */
+			  0x8000  0xd0048000 0x2000	/* port0x1_port2 */
+			  0xC000  0xd004C000 0x2000	/* port0x1_port3 */
+			  0x10000 0xd0080000 0x2000	/* port1x1_port0 */
+			  0x12000 0xd0082000 0x2000	/* port3x1_port0 */
+			  0x14000 0xd0084000 0x2000	/* port1x1_port1 */
+			  0x18000 0xd0088000 0x2000	/* port1x1_port2 */
+			  0x1C000 0xd008C000 0x2000	/* port1x1_port3 */>;
+
+		pcie0.0@0xd0040000 {
+			reg = <0x0 0x2000>;
+			interrupts = <58>;
+			clocks = <&gateclk 5>;
+			marvell,pcie-port = <0>;
+			marvell,pcie-lane = <0>;
+			status = "disabled";
+		};
+
+		pcie0.1@0xd0044000 {
+			reg = <0x4000 0x2000>;
+			interrupts = <59>;
+			clocks = <&gateclk 5>;
+			marvell,pcie-port = <0>;
+			marvell,pcie-lane = <1>;
+			status = "disabled";
+		};

[... rest trimmed for brevity]

You're right: if you're doing dynamic allocation of windows, then you
really don't need a ranges property. However, the way the PCI node is
set up definitely looks incorrect. PCI already has a very specific
binding for PCI host controller nodes. First, #address-cells = <3>,
#size-cells = <2> and device_type = "pci" must be there. You don't
want to break this. You can find details on the pci and pci-express
bindings here:

http://www.openfirmware.org/1275/bindings/pci/pci2_1.pdf
http://www.openfirmware.org/1275/bindings/pci/pci-express.txt

As for the child nodes: PCI is a discoverable bus, so normally I
wouldn't expect to see child nodes at all when using a dtb. The only
time nodes should be populated is when a device has non-discoverable
characteristics. In your example above you do have some additional
data, but I don't know enough about pci-express to say how best to
encode it or whether it is needed at all. Ben might have some
thoughts on this.

When PCI child nodes are present, it is important to stick with the
established PCI addressing scheme, which uses 3 cells for addressing.
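Roughly, I'd expect the node to look something like the following.
This is just an untested sketch to show the shape of the binding; I've
reused the addresses and properties from your patch, and the unit
address and config-space encoding assume port 0 shows up as device 1
on bus 0, which may not match your hardware:

	pcie-controller {
		compatible = "marvell,armada-370-xp-pcie";
		device_type = "pci";	/* required by the PCI binding */
		#address-cells = <3>;	/* PCI address: phys.hi phys.mid phys.lo */
		#size-cells = <2>;	/* PCI sizes are 64-bit, so 2 cells */
		/* no ranges property needed if the address windows are
		 * programmed at runtime; the controller's own registers
		 * would be described against the parent bus (omitted
		 * here) */

		pcie@1,0 {
			device_type = "pci";
			/* the first reg entry is the config space address
			 * of the device: bus 0, device 1, function 0,
			 * encoded in phys.hi as 0x0800 */
			reg = <0x0800 0 0 0 0>;
			interrupts = <58>;
			clocks = <&gateclk 5>;
			marvell,pcie-port = <0>;
			marvell,pcie-lane = <0>;
		};
	};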
The first entry in the reg property must represent the configuration
space, so that DT nodes can be matched up with discovered devices.
There is no requirement to include mappings for the memory and I/O
regions if the host controller can assign them dynamically. I don't
think you need a ranges property at all for what you're doing.

Access to config space is generally managed by the PCI host controller
driver and the PCI subsystem, and PCI device drivers don't typically
use of_ calls directly.

g.