Hi,

On Wednesday 21 October 2015 11:10 PM, Stephen Warren wrote:
> On 10/21/2015 06:15 AM, Thierry Reding wrote:
>> On Mon, Oct 19, 2015 at 05:30:42PM -0600, Stephen Warren wrote:
>>> From: Stephen Warren <swarren@xxxxxxxxxx>
>>>
>>> Convert the binding to provide a PHY per lane, rather than a PHY per
>>> "pad" block in the hardware. This will allow the driver to easily know
>>> which lanes are used by clients, and thus only enable those lanes, and
>>> generally better aligns with the fact the hardware has configuration per
>>> lane rather than solely configuration per "pad" block.
>>>
>>> Add entries to pinctrl-tegra-xusb.h to enumerate all "pad" blocks on
>>> Tegra210, which will allow an XUSB DT node to reference the PHYs it
>>> needs.
>>>
>>> Add an nvidia,ss-port-map register to allow configuration of the
>>> XUSB_PADCTL_SS_PORT_MAP register.
>
>> According to Kishon's latest recommendation, the padctl binding should
>> probably look more like this:
>>
>>	padctl@0,7009f000 {
>>		...
>>
>>		phys {
>>			pcie {
>>				/* 5 subnodes on Tegra124, 7 on Tegra210 */
>>				pcie-0 {
>>					...
>>				};
>>
>>				...
>>			};
>
> I noticed that he mentioned a separate node per PHY brick or PHY.
>
> That seems like an odd requirement, or even recommendation, since the
> PHY bindings, like (almost?) all other DT provider/consumer bindings,
> use a phandle+specifier to indicate which resource is being provided. As

A lot of that was added before the PHY core was better able to handle
multi-PHY PHY providers. Using phandle+specifier made the driver do a
lot of work just to find the PHY, which was unnecessary. This can be
avoided simply by modeling the DT node properly and using the correct
phandle in the controller DT node.

> such, there's no absolute need to represent objects as DT nodes,
> although there may be other good arguments for doing so.

Yeah, that would be a better representation of the hardware and would
avoid a lot of useless code in the driver.

Thanks
Kishon
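
For illustration, here is a minimal sketch of what a per-lane provider
and a consumer node might look like, extending the snippet quoted above.
The compatibles, labels, lane names, and phy-names values below are
assumptions for the sake of the example, not the final binding:

	/* Provider: one subnode per "pad" block, one child node per lane.
	 * Each lane node acts as its own PHY provider. */
	padctl: padctl@0,7009f000 {
		compatible = "nvidia,tegra210-xusb-padctl";	/* assumed compatible */
		reg = <0x0 0x7009f000 0x0 0x1000>;

		phys {
			pcie {
				/* 5 lane subnodes on Tegra124, 7 on Tegra210 */
				pcie_phy0: pcie-0 {
					#phy-cells = <0>;
				};

				pcie_phy1: pcie-1 {
					#phy-cells = <0>;
				};

				/* ... remaining lanes ... */
			};

			/* ... usb2, sata, etc. pad blocks ... */
		};
	};

	/* Consumer: the XUSB controller references exactly the lanes it
	 * uses by phandle, so the driver only enables those lanes. */
	usb@0,70090000 {
		compatible = "nvidia,tegra210-xusb";		/* assumed compatible */
		reg = <0x0 0x70090000 0x0 0x8000>;

		phys = <&pcie_phy0>, <&pcie_phy1>;
		phy-names = "usb3-0", "usb3-1";			/* illustrative names */
	};

With this layout the driver can use of_phandle_iterator or the generic
PHY lookup on the consumer side, rather than decoding a specifier to
figure out which lane is meant.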