What is a "phy"; lanes or a group?

Kishon,

I wanted to confirm a few aspects about the PHY subsystem in Linux, and also the DT representation of PHYs. Mainly I'd like to understand whether an individual PHY represents a single "lane" of traffic, or the whole set of lanes related to a particular IO controller or "port" of an IO controller. This distinction isn't relevant for e.g. a USB2 controller that only uses a single (differential) lane, but is for something like a x4 PCIe port.

I think it's simplest if I describe some scenarios and confirm the set of phys you'd expect to exist in each:

1)

1 single PCIe controller, with 1 x1 port.

I believe the PCIe driver would get 1 PHY.
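To make that concrete, I'd expect a binding roughly like this (all node, label, and compatible names below are made up purely for illustration):

    pcie_phy: pcie-phy {
        compatible = "vendor,soc-pcie-phy";    /* placeholder */
        #phy-cells = <0>;
    };

    pcie-controller {
        /* other properties omitted */
        phys = <&pcie_phy>;
        phy-names = "pcie";
    };

and the PCIe driver would do a single (devm_)phy_get() of that one PHY.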

2)

1 single PCIe controller with 2 x1 root ports.

I believe the PCIe driver would get 2 separate PHYs; 1 PHY per root port.
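Concretely (made-up names again), I'd expect one PHY specifier per root port, distinguished by phy-names:

    pcie-controller {
        /* other properties omitted */
        phys = <&pcie_phy0>, <&pcie_phy1>;
        phy-names = "pcie-0", "pcie-1";
    };

or alternatively a phys/phy-names pair inside each root-port sub-node, if the binding describes root ports as sub-nodes.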

3)

1 single PCIe controller with 1 x4 port.

Would you expect the PCIe driver to get a single PHY that represents the collection of 4 lanes, or to get 4 separate PHYs, one for each lane?
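In DT terms, the difference would be roughly this (names invented):

    /* a single PHY covering all 4 lanes: */
    phys = <&pcie_phy>;
    phy-names = "pcie";

    /* vs. one PHY per lane: */
    phys = <&pcie_lane0_phy>, <&pcie_lane1_phy>,
           <&pcie_lane2_phy>, <&pcie_lane3_phy>;
    phy-names = "pcie-lane-0", "pcie-lane-1",
                "pcie-lane-2", "pcie-lane-3";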

== PHY count for multi-lane controllers

Perhaps the answer depends on the SoC architecture; I could imagine some SoCs having an entirely independent PHY per lane, thus requiring the PCIe driver to get 4 PHYs, whereas another SoC might have a single set of registers that controls all 4 lanes as a single block, which would in turn be exposed as a single PHY.

However, I'd hope that the PCIe driver wouldn't have to understand those details, so that it's possible to transplant the PCIe IP between SoCs that use different PHY architectures without the driver needing to understand the differences. Has any thought been given to this?

We could isolate the PCIe driver by either of the following (rough DT sketches for both follow the list):

a) Always making the PCIe driver get a single PHY for each port irrespective of lane count. If the HW doesn't work like this, we could provide some kind of "aggregate" PHY that "fans out" to all the individual PHYs.

b) Always making the PCIe driver get as many PHYs as there are lanes. In the case where the PHY provider only implements a single PHY, we could either provide the same PHY for each lane, or provide dummy PHYs for all but one lane.
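To illustrate what I mean, the consumer side of those two options might look roughly like this (all names invented, assuming the generic phys/phy-names properties):

    /* (a): the PCIe driver always gets exactly one PHY per port; an
     * "aggregate" PHY in the provider would hide however many lane
     * PHYs actually exist behind it:
     */
    phys = <&pcie_port0_phy>;
    phy-names = "pcie-0";

    /* (b): the PCIe driver always gets one PHY per lane; a provider
     * that only implements a single PHY could hand back that same PHY
     * (or dummies) for every lane and rely on reference counting:
     */
    phys = <&pcie_phy>, <&pcie_phy>, <&pcie_phy>, <&pcie_phy>;
    phy-names = "lane-0", "lane-1", "lane-2", "lane-3";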

== PHY brick provider count

On Tegra, we have some PHY "bricks" that contain multiple lanes. Each lane can be configured to connect to one of n IO controllers (USB3, PCIe, SATA). The exact set of "n" IO controllers associated with each lane varies from lane to lane. There are some global register settings to enable the "brick" as a whole, and some per-lane settings to enable (and configure muxing for) each lane. We currently have the muxing modelled via the pinctrl DT bindings and Linux driver subsystem.
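For reference, that muxing currently looks roughly like the following (approximated from memory of the Tegra124 XUSB padctl pinctrl binding, so don't take the exact property names too literally):

    padctl: padctl {
        compatible = "nvidia,tegra124-xusb-padctl";
        /* reg, etc. omitted */

        pinctrl-names = "default";
        pinctrl-0 = <&padctl_default>;

        padctl_default: pinmux {
            /* route the "pcie-0" lane to the USB3 controller */
            usb3p0 {
                nvidia,lanes = "pcie-0";
                nvidia,function = "usb3";
            };
        };
    };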

How many PHY objects would you expect to exist in such a setup? Options I can think of are:

a)

A single PHY, since there is some global register state associated with the "brick". Multiple IO controller drivers will get this same PHY, and we'll implement reference counting to determine when to actually enable/disable this PHY. The driver will look at the muxing information to determine the actual use for each lane, and enable/disable/configure the per-lane options the first time the PHY is "got".

The disadvantage here is that per-lane information isn't implied by the set of PHY objects in use, since there's only one. This information could perhaps be provided by various custom properties in the PHY provider's DT node(s) though.

As background, we currently have this option implemented for Tegra124's PCIe brick (which supports 5 lanes).

Concretely, the provider might use #phy-cells = <0> here if there's just one brick implemented in the PHY HW module.
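In other words, something like this, with every IO controller that uses the brick getting the very same PHY, and enable/disable reference-counted inside the provider (names invented):

    pcie_phy: padctl {
        compatible = "vendor,soc-pcie-padctl";    /* placeholder */
        #phy-cells = <0>;    /* one PHY for the whole brick */
    };

    pcie-controller {
        phys = <&pcie_phy>;
        phy-names = "pcie";
    };

    sata {
        phys = <&pcie_phy>;    /* the very same PHY object */
        phy-names = "sata";
    };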

b)

A PHY per IO controller (port) that may use (parts of) the PHY brick. For example, even with just a 7-lane PHY brick, we might expose one PHY for each of the 4 USB2 single-port controllers, one for each of the 4 ports in the USB3 controller, one for the SATA controller, and one for each of the 2 ports in the PCIe controller. (That's 11 PHY objects for 7 lanes.)

The disadvantage here is that we potentially end up (and certainly do on Tegra) with the PHY provider providing many more PHYs than lanes, if the number of IO controllers (ports) that can be muxed into the PHY brick exceeds the number of lanes.

As background, I've seen a patch to extend Tegra124's PCIe PHY binding to essentially this model. However, the conversion looked like it supported a mix of models (a) and (b) for different cases, which feels inconsistent.

Concretely, the provider might use #phy-cells = <1>, with valid values being:

PHY_USB2_CONTROLLER0
PHY_USB2_CONTROLLER1
...
PHY_USB3_CONTROLLER0_PORT0
PHY_USB3_CONTROLLER0_PORT1
...
PHY_PCIE_CONTROLLER0_PORT0
PHY_PCIE_CONTROLLER0_PORT1
PHY_SATA
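i.e. something along these lines, where the cell values would presumably come from a dt-bindings header (the header name and all other names here are hypothetical):

    #include <dt-bindings/phy/tegra-padctl.h>    /* hypothetical header */

    padctl: padctl {
        compatible = "vendor,soc-padctl";    /* placeholder */
        #phy-cells = <1>;
    };

    usb3-controller {
        phys = <&padctl PHY_USB3_CONTROLLER0_PORT0>,
               <&padctl PHY_USB3_CONTROLLER0_PORT1>;
        phy-names = "usb3-0", "usb3-1";
    };

    pcie-controller {
        phys = <&padctl PHY_PCIE_CONTROLLER0_PORT0>;
        phy-names = "pcie-0";
    };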

c)

A PHY per IO lane in the brick. In this case, the global PHY enabling would happen when the first PHY was enabled via reference counting, and all the per-lane registers would be controlled by each individual PHY object.

This option feels the most generic, and I think it gives the PHY driver the most lane-specific information to work with.

The disadvantage here is that it's difficult for the PHY provider to know which lanes form part of the same logical connection, e.g. if we need to program each lane to enable it, then perform some link-level configuration across all lanes involved in that link.

Concretely, the provider might use #phy-cells = <1>, with valid values being:

PHY_LANE0
PHY_LANE1
PHY_LANE2
...
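so that e.g. an x4 PCIe port would consume four of these (hypothetical names again):

    padctl: padctl {
        compatible = "vendor,soc-padctl";    /* placeholder */
        #phy-cells = <1>;
    };

    pcie-controller {
        phys = <&padctl PHY_LANE0>, <&padctl PHY_LANE1>,
               <&padctl PHY_LANE2>, <&padctl PHY_LANE3>;
        phy-names = "lane-0", "lane-1", "lane-2", "lane-3";
    };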

Again, perhaps the answer differs between SoCs? If so, do you have any thoughts/guidance on which option is most appropriate when?

Finally, do you have any generic thoughts re: structuring of PHY provider DT nodes? I've heard of a proposal for Tegra to have a top-level DT node for the HW module that implements the PHYs, with a separate DT sub-node per implemented PHY in order to provide PHY-specific configuration. I see some of the Samsung bindings already do something like this. Do you have any general guidance on this, or do you think individual bindings are free to do whatever makes sense for the HW in question?
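In other words, something vaguely like this, where each sub-node acts as its own PHY provider and carries its own configuration properties (all names invented):

    padctl {
        compatible = "vendor,soc-padctl";    /* placeholder */
        /* reg, etc. omitted */

        pcie_lane0_phy: pcie-lane-0 {
            #phy-cells = <0>;
            /* per-PHY configuration properties would go here */
        };

        sata_phy: sata {
            #phy-cells = <0>;
        };
    };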

Thanks for reading this long email, and any responses you give!