Hi,

On Wed, Aug 24, 2011 at 10:49:59AM -0400, Alan Stern wrote:
> On Wed, 24 Aug 2011, Felipe Balbi wrote:
> 
> > Hi,
> > 
> > On Tue, Aug 23, 2011 at 05:33:20PM -0400, Alan Stern wrote:
> > > On Tue, 23 Aug 2011, Felipe Balbi wrote:
> > > 
> > > > > Okay. But consider this case for a moment. Merely because the OMAP
> > > > > implementation requires a bridge device between the PCI and USB
> > > > > layers, that doesn't mean the Intel implementation should be forced
> > > > > to use one.
> > > > 
> > > > Alan, my whole point is that this is hardly an OMAP-only thing. Just
> > > > look into the many different ARM SoCs we have.
> > > 
> > > All right, try this instead: Merely because OMAP and a bunch of other
> > > SoC architectures require a bridge device between the PCI and USB
> > > layers, that doesn't mean the Intel implementation should be forced to
> > > use one.
> > 
> > That doesn't mean either that Intel couldn't license the same IP the ARM
> > SoCs are licensing.
> 
> Of course. And when they do, maybe adding the glue layer will be
> appropriate. Until then, it isn't.

Are you sure they aren't already?

> > > > so there is a block which handles the BUS interconnection logic. The
> > > > CPU_interface_block is decoding the bus protocol. No matter if it's
> > > > PCI, AXI, OCP, AHB, or whatever else, you will have some entity
> > > > handling the integration with the CPU/SoC/Bus.
> > > 
> > > So what? Sure, every PCI device has such an entity. Does that mean
> > > every pci_device structure needs to have a platform_device child?
> > 
> > Why not? Then we split the integration logic (PCI, OMAP, Freescale,
> > Marvell, etc.) from the IP driver.
> 
> Good luck trying to sell that idea to the PCI maintainers. They'll
> laugh at you.

That's because they have no embedded experience and no clue about the
challenges we face when we have to re-use a PCI- or x86-centric piece of
code such as the USB host stack.

> > > At this point it's not clear how much code Sebastian really ended up
> > > sharing. I've got a feeling it wasn't very much. And in the process,
> > > he made a mess of hcd-pci.c. Separating the files could easily end up
> > > being better.
> > > 
> > > Besides, I'm talking about adding a single file that would handle _all_
> > > the platform devices. Not a separate file for each architecture or
> > > platform.
> > 
> > And I'm talking about not adding a new file at all. After all *hci-hcd
> > are converted, the only bus still needing a bridge would be PCI (sorry);
> > all others could pass correct resources via devicetree and instantiate
> > xhci-core directly.
> 
> In the end, this comes down to a tradeoff. Do we implement a fake
> "glue" platform device that has no real meaning in order to simplify
> some drivers by removing their need to support the PCI bus as well as
> the platform bus? Or do we keep the device model data structures
> accurate, but complicate the drivers?
> 
> I can't help thinking that other subsystems have solved the same
> problem, and it might be a good idea to do what they do. The example
> I'm most familiar with is SCSI; it supports host adapters of any type.
> Following the SCSI model would mean _always_ sticking a new device
> between the controller (whether PCI or platform) and the root hub.
> The new device would belong to the USB bus_type, not the platform bus.

Then let's add that. But as of today, usb_bus_type is for USB device
drivers (a USB camera, USB storage, etc.). You would need to add some
extra bus_type for USB Host Controllers, and why would you add a whole
new bus when there's already the platform bus, ready to use and very
well tested?
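FWIW, the split I keep asking for isn't a lot of code. Below is a
completely untested sketch, just to illustrate the shape of it; the
driver name, the "xhci-core" device name, and the PCI ID are all made
up. The glue's only job is translating bus-specific details into plain
resources and instantiating the bus-agnostic IP core driver as a
platform_device child:

/*
 * Untested sketch with made-up names and PCI ID, only to illustrate
 * splitting the PCI integration logic from the IP core driver.
 */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/platform_device.h>

static int xhci_pci_glue_probe(struct pci_dev *pci,
		const struct pci_device_id *id)
{
	struct platform_device	*core;
	struct resource		res[2] = { };
	int			ret;

	ret = pci_enable_device(pci);
	if (ret)
		return ret;

	/* translate bus-specific details into plain resources... */
	res[0].start	= pci_resource_start(pci, 0);
	res[0].end	= pci_resource_end(pci, 0);
	res[0].flags	= IORESOURCE_MEM;

	res[1].start	= pci->irq;
	res[1].flags	= IORESOURCE_IRQ;

	/* ... and hand them to the bus-agnostic IP core driver */
	core = platform_device_alloc("xhci-core", -1);
	if (!core) {
		ret = -ENOMEM;
		goto err_disable;
	}

	core->dev.parent = &pci->dev;

	ret = platform_device_add_resources(core, res, ARRAY_SIZE(res));
	if (ret)
		goto err_put;

	ret = platform_device_add(core);
	if (ret)
		goto err_put;

	pci_set_drvdata(pci, core);

	return 0;

err_put:
	platform_device_put(core);
err_disable:
	pci_disable_device(pci);
	return ret;
}

static void xhci_pci_glue_remove(struct pci_dev *pci)
{
	platform_device_unregister(pci_get_drvdata(pci));
	pci_disable_device(pci);
}

static const struct pci_device_id xhci_pci_glue_ids[] = {
	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x1234), },	/* made-up ID */
	{ },
};
MODULE_DEVICE_TABLE(pci, xhci_pci_glue_ids);

static struct pci_driver xhci_pci_glue_driver = {
	.name		= "xhci-pci-glue",
	.id_table	= xhci_pci_glue_ids,
	.probe		= xhci_pci_glue_probe,
	.remove		= xhci_pci_glue_remove,
};

static int __init xhci_pci_glue_init(void)
{
	return pci_register_driver(&xhci_pci_glue_driver);
}
module_init(xhci_pci_glue_init);

static void __exit xhci_pci_glue_exit(void)
{
	pci_unregister_driver(&xhci_pci_glue_driver);
}
module_exit(xhci_pci_glue_exit);

MODULE_LICENSE("GPL");

On an SoC, board code or devicetree would instantiate that same
"xhci-core" device directly, with the memory window and IRQ of the
integrated controller, and the core driver would never know or care
which bus it sits on.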
-- 
balbi