Hi,

On Tue, Aug 23, 2011 at 02:43:52PM -0400, Alan Stern wrote:
> On Tue, 23 Aug 2011, Felipe Balbi wrote:
> 
> > > > I wouldn't call it fake. You _do_ have an IP core with a PCIe bridge
> > > > around and that's what the patch tries to model.
> > > 
> > > I don't understand. The IP core with the PCIe bridge already
> > > corresponds to a pci_device, doesn't it? So then what does the new
> > 
> > not really. Take for example the DesignWare USB3 IP driver which I just
> > sent. The exact same IP can be configured (at RTL level) as host-only,
> > DRD and device-only.
> 
> You'll have to forgive me for not being familiar with all the jargon.
> I don't remember what DRD is -- dual-role device? Is that like OTG?

DRD == dual-role device. It's an OTG with a broken arm, so to speak: OTG
can handle HNP and automatic role selection, while a DRD needs SW
intervention or a cable change for role selection.

> > The host side is XHCI. For the purpose of $SUBJECT let's talk about a
> > host-only configuration, so here it goes:
> > 
> > That particular IP talks AXI. That's how the verilog code was written,
> 
> What's AXI? (Or does it not really matter?)

AXI is one of the AMBA buses from ARM. And yes, it does matter. The IP
itself doesn't know a single thing about PCI. That's my point.

> > still we have TUSB7340 which is a host-only configuration of that IP
> > (so an XHCI controller) on a PCIe card. Don't you agree that our HW
> > engineers had to create an AXI-to-PCIe bridge? Don't you agree those
> > are two different devices (on the same die)?
> 
> I have no idea what your engineers had to create. As for whether they
> are two different devices, that's somewhat in the eye of the beholder.
> Does the kernel need to program or configure the PCIe-to-AXI bridge
> device in any way? If not, then as far as the kernel is concerned,
> that device may as well not exist.

you might be right here.
> > > Are you talking about something like the traditional arrangement
> > > whereby several controllers are bundled together as separate PCI
> > > functions in the same PCI slot? The way Intel boards have UHCI
> > 
> > unfortunately not :-(
> > 
> > > controllers at 00:1d.0, 00:1d.1, and 00:1d.2 plus an EHCI controller at
> > > 00:1d.7? Each of those PCI functions corresponds to a separate
> > > pci_device structure, which is the parent of the controller's root hub
> > > device. There's no need for an intervening platform device.
> > 
> > see above.
> 
> Okay. But consider this case for a moment. Merely because the OMAP
> implementation requires a bridge device between the PCI and USB layers,
> that doesn't mean the Intel implementation should be forced to use one.

Alan, my whole point is that this is hardly an OMAP-only thing. Just
look at the many different ARM SoCs we have.

> As I understand it, Sebastian's patch set would have created a fake
> platform-bus device even in the Intel case, not corresponding to any
> real hardware.
> 
> Was my understanding wrong?

Because the XHCI is a separate entity. The way things go at that level
is somewhat like below (in very rough pseudo-code):

	module xhci_hcd (
		<list of inputs>,
		<list of outputs>,
		<CPU-related signals>,
		<IRQ line signal>,
		<DMA related signals>,
		<etc etc etc>);

		CPU_interface_block(<assign inputs and outputs correctly>);
		real_usb_logic(<assign inputs and outputs correctly>);

so there is a block which handles the bus interconnection logic. That
CPU_interface_block is what decodes the bus protocol. No matter whether
it's PCI, AXI, OCP, AHB, or anything else, you will have some entity
handling the integration with the CPU/SoC/bus.

> > > I'm well aware of all that. I'm asking if there isn't a different way
> > > to accomplish the same result.
> > 
> > I can't see any; if you come up with a better approach, I'm willing to
> > consider it :-)
> 
> That's what I was trying to present.
> Namely, write an xhci-core
> module, with routines that can be invoked by various driver modules
> such as xhci-pci or xhci-omap, and do it in such a way that these
> routines take a struct device as an argument -- and are agnostic as to
> whether that struct device is embedded in a struct pci_device or a
> struct platform_device.

Then what happens when you need two different devices (say, one on PCIe
and one platform_device) on the same board, working at the same time?
Will you build two modules? The way we proposed will have the driver
core allocate two instances of xhci-hcd automatically, because two
separate platform_devices will be added.

> Maybe Sebastian's work can be changed to work like this, perhaps even
> without the need for a separate xhci-core module. That would be fine
> with me. All I really object to is creating a platform_device
> structure in cases where it doesn't correspond to any real hardware.

but the 'real hardware' exists, even though we don't need to configure
it. The same happens with the nop-xceiv driver. It's a nop-xceiv
because it works out of the box; all you have to do is provide a voltage
rail and ground to it. Nevertheless, the driver is there and it is in
use by some boards.

> > > > I don't think so. I think your approach is inverted. Why should a PCI
> > > > glue/bridge call into XHCI ?
> > > 
> > > That's like asking why ehci-hcd should call into usbcore. It does so
> > 
> > no, it's not.
> > 
> > > because hcd-pci.c provides functions that it can use.
> > 
> > There's nothing PCI provides that XHCI needs to know about. I can't see
> > the need to invert things.
> 
> You read my reply backward. A PCI glue/bridge would call into XHCI
> because XHCI provides things that the glue/bridge can use -- namely, an
> xHCI controller driver. Not the other way around.
> 
> Besides, PCI _does_ provide things XHCI needs to use. Things like
> memory-mapped IO addresses, interrupt numbers, and other resources.
memory-mapped IO, interrupt numbers and other resources aren't a
PCI-only thing. And if you just
platform_device_add_resources(pdev, res, ARRAY_SIZE(res)), the
platform_device will be able to use the same resources without having to
know about PCI.

> > > It can just as easily be passed by a PCI device driver with no need for
> > > a platform-bus glue layer.
> > 
> > For a PCI-only context, of course. But when we need to add PCI-less SoC
> > devices, it starts to get funny and we will end up with ehci-hcd.c all
> > over again.
> 
> That's okay. There's nothing wrong with adding glue layers when they
> are appropriate. But they shouldn't be added all the time if they are
> sometimes inappropriate.
> 
> Finally, as far as the $SUBJECT patch is concerned, it looks like
> Sebastian was trying to avoid creating an hcd-platform.c source file by
> sharing the code in hcd-pci.c. Looking at it more closely, it appears
> to contain some layering violations. For example, consider where
> hcd_plat_pci_suspend() calls _suspend_common(). The first argument it
> passes, dev, points to the platform device whereas it's expected to
> point to the pci_device.
> 
> I suspect the code sharing wasn't done correctly. Creating a new file
> instead might well work out better. The amount of code duplication
> probably wouldn't be all that large.

Fine, then let's duplicate. I just thought the whole point was to share
code where possible to avoid duplication. The driver tree has been
adding code at a very fast rate, and sooner or later we will be the next
target wrt code sharing.

-- 
balbi