Hi,

On Tue, Jun 14, 2011 at 02:32:30PM -0700, Sarah Sharp wrote:
> On Tue, Jun 14, 2011 at 03:39:30PM +0300, Felipe Balbi wrote:
> > Hi Sarah,
> >
> > following we have two patches for XHCI. The first
> > one can be applied now and it's just a Kconfig change.
> >
> > The second one, I wanted to discuss with you in more
> > detail. It's adding a platform_device to xhci.c. I
> > want to do that because it will allow us to add more xhci
> > glue/link/name-it-as-you-wish layers without having
> > to touch xhci.c and will avoid the ifdeffery mess
> > we have on the EHCI/OHCI modules.
>
> I'm not familiar with the platform device code, but I would like to
> avoid excessive ifdefferies. :)

good :-)

> > The idea is that we will always have (at least I
> > think) memory mapped controllers,
>
> I'm pretty sure the xHCI spec requires memory mapped controllers. I
> think it may also require interrupts, although I suppose a driver could
> poll the event ring if interrupts aren't possible.

Well, we can handle interrupt/polling with a feature flag. See more
below.

> > so it doesn't
> > really matter if it's PCI or integrated in a SoC
> > environment; as long as we set up the
> > IP integration context, the core driver shouldn't
> > really care where it's running.
>
> Do you think platform xHCI drivers in an SOC will need to be careful how
> much memory they allocate for xHCI structures? For example, the ring
> segments could be smaller if necessary, as long as all the segments are
> the same size.

I don't think they'll need to be too careful about memory allocation,
no. But the thing is that, generally, you have an IP core which probably
talks AMBA, and then a PCI or OMAP or FreeScale or STE bridge around
that core. So it could very well be that the IP core on OMAP ends up
being the same one as on STE, although in this case the IP core itself
doesn't matter much as they're all xHCI.

Anyway, I'm trying to avoid lots of rework when adding support for more
devices to the xHCI driver. The same could be done for EHCI/OHCI/UHCI
too.

> > If you need a full
> > implementation where I'm using this, you can see my
> > (still out of tree) dwc3 driver, which is the driver
> > for the OMAP5 USB3 controller. I have the same IP
> > core for OMAP5 and on a PCIe pre-silicon environment
> > for development purposes. The PCI glue layer you
> > can find at [1], the OMAP5 glue layer at [2] and the core
> > driver at [3].
> >
> > Anyway, here are the details:
> >
> > xhci-pci.c allocates a platform_device for xhci.c
> > and sets itself as parent of that device. We also
> > need to set dma_mask, coherent_dma_mask and dma_parms
> > to the same as the parent device. Like so:
> >
> >         pci_set_drvdata(pci, glue);
> >
> >         dma_set_coherent_mask(&xhci->dev, pci->dev.coherent_dma_mask);
> >
> >         xhci->dev.dma_mask = pci->dev.dma_mask;
> >         xhci->dev.dma_parms = pci->dev.dma_parms;
> >         xhci->dev.parent = &pci->dev;
> >         glue->dev = &pci->dev;
> >         glue->xhci = dwc3;
> >
> > You also fetch the pci_resources into struct
> > resources for the xhci platform_device, like so:
> >
> >         res[0].start = pci_resource_start(pci, 0);
> >         res[0].end = pci_resource_end(pci, 0);
> >         res[0].name = "memory";
> >         res[0].flags = IORESOURCE_MEM;
> >
> >         res[1].start = pci->irq;
> >         res[1].name = "irq";
> >         res[1].flags = IORESOURCE_IRQ;
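To make the above a bit more concrete, here's a rough sketch of how the
whole xhci-pci glue probe could end up looking. This is only a sketch:
struct xhci_glue, the "xhci" device name and the error unwinding are
made up for illustration (loosely following the dwc3 PCI glue), not
copied from an actual patch:

/*
 * Rough sketch only: struct xhci_glue, the "xhci" device name and the
 * error unwinding are illustrative, loosely following the dwc3 PCI glue.
 */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/pci.h>
#include <linux/platform_device.h>
#include <linux/dma-mapping.h>
#include <linux/ioport.h>

struct xhci_glue {
        struct device *dev;
        struct platform_device *xhci;
};

static int xhci_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
{
        struct platform_device *xhci;
        struct xhci_glue *glue;
        struct resource res[2];
        int ret;

        glue = kzalloc(sizeof(*glue), GFP_KERNEL);
        if (!glue)
                return -ENOMEM;

        ret = pci_enable_device(pci);
        if (ret)
                goto err0;

        xhci = platform_device_alloc("xhci", -1);
        if (!xhci) {
                ret = -ENOMEM;
                goto err1;
        }

        /* hand the PCI BAR and the legacy IRQ down as plain resources */
        memset(res, 0x00, sizeof(res));

        res[0].start = pci_resource_start(pci, 0);
        res[0].end = pci_resource_end(pci, 0);
        res[0].name = "memory";
        res[0].flags = IORESOURCE_MEM;

        res[1].start = pci->irq;
        res[1].end = pci->irq;
        res[1].name = "irq";
        res[1].flags = IORESOURCE_IRQ;

        ret = platform_device_add_resources(xhci, res, ARRAY_SIZE(res));
        if (ret)
                goto err2;

        /* the child inherits the DMA settings of its PCI parent */
        dma_set_coherent_mask(&xhci->dev, pci->dev.coherent_dma_mask);
        xhci->dev.dma_mask = pci->dev.dma_mask;
        xhci->dev.dma_parms = pci->dev.dma_parms;
        xhci->dev.parent = &pci->dev;

        glue->dev = &pci->dev;
        glue->xhci = xhci;
        pci_set_drvdata(pci, glue);

        ret = platform_device_add(xhci);
        if (ret)
                goto err2;

        return 0;

err2:
        platform_device_put(xhci);
err1:
        pci_disable_device(pci);
err0:
        kfree(glue);
        return ret;
}

xhci.c itself would then be a plain platform_driver that ioremaps the
memory resource and requests the IRQ resource, without knowing whether
they came from PCI or from a SoC interconnect.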
> How does this interact with the MSI/MSI-X setup in xhci.c? If we end up
> not using the legacy PCI interrupt, how does that interact with the
> platform device code?

I don't know much (actually anything :-) about MSI/MSI-X, but I believe
we could have two sets of handlers and use a feature flag on the
driver_data field to figure out what to use. Something like:

xhci_probe()
{
        ...

        if (features & XHCI_HAS_MSI)
                use_msi_irqs();
        else if (irq)
                request_irq();

        ...
}

(there's a slightly fuller sketch of that idea at the end of this mail)

> > It will also avoid us having to provide basically
> > the same struct hc_driver every time.
> >
> > If we need to differentiate a few of the glue
> > layers and act differently based on a few small
> > details, we can use the driver_data field of
> > struct platform_device_id:
> >
> > static const struct platform_device_id xhci_id_table[] __devinitconst = {
> >         {
> >                 .name = "xhci-omap",
> >                 .driver_data = &xhci_omap_data,
> >         },
> >         {
> >                 .name = "xhci-pci",
> >                 .driver_data = &xhci_pci_data,
> >         },
> >         { }, /* Terminating Entry */
> > };
> >
> > But I doubt we will need something like that
> > any time soon.
> >
> > What do you think of this idea? Do you see any
> > terrible drawbacks which I didn't see? We have
> > two drivers using this (MUSB and DWC3) and it
> > has been quite a good solution thus far.
>
> I'll admit that I don't really know how MUSB works. So there's an MUSB
> platform device that can be both an xHCI host controller and a USB3
> device controller?

MUSB is a Dual-Role Device USB2 OTG controller, so it has a peripheral
controller and a non-standard host. The controller coming in OMAP5 is a
DRD USB3 OTG controller, so it has a peripheral controller and a
(finally standard, thanks Synopsys :-) xHCI block. That's why I'm
concerned. Besides, soon enough many SoCs will start popping up with
xHCI IPs on them, so I figured it'd be better to fix things up now,
while there's only one user.

> > udev
> > will load the correct modules anyway based on
> > device creation.
>
> Do you mean udev, the userspace program? And if so, does that mean we
> have to wait for userspace bring up before the xHCI driver probes a
> device? (A little confused here.) How would that interact with booting
> off of a USB device?

I need to think a bit about that, but udev loads modules based on
device creation. So, when the PCI bus is probed, it would create the
xhci-pci device, then udev loads xhci-pci.ko, which in turn creates the
xhci platform_device, then udev loads xhci.ko. In case we're booting
off of a USB device, either you make that built-in or you'll already
have an initramfs.

--
balbi
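As mentioned above, here is a slightly fuller sketch of how the feature
flag could be plumbed through driver_data. Again, purely illustrative:
XHCI_HAS_MSI, XHCI_POLL_EVENTS, struct xhci_driver_data and the handler
are invented names, and it assumes the glue registers the child under a
per-glue name ("xhci-pci", "xhci-omap") so the id table can attach the
right flags:

/*
 * Illustrative only: the flag names, struct xhci_driver_data and the
 * handler below are made up to show the idea, not taken from a patch.
 */
#include <linux/bitops.h>
#include <linux/interrupt.h>
#include <linux/module.h>
#include <linux/platform_device.h>

#define XHCI_HAS_MSI     BIT(0) /* glue set up MSI, vector passed as IRQ resource */
#define XHCI_POLL_EVENTS BIT(1) /* no usable IRQ, poll the event ring */

struct xhci_driver_data {
        unsigned long features;
};

static irqreturn_t xhci_irq(int irq, void *data)
{
        /* process the event ring here */
        return IRQ_HANDLED;
}

static const struct xhci_driver_data xhci_pci_data = {
        .features = XHCI_HAS_MSI,
};

static const struct xhci_driver_data xhci_omap_data = {
        .features = 0,
};

static const struct platform_device_id xhci_id_table[] = {
        { "xhci-pci", (kernel_ulong_t)&xhci_pci_data },
        { "xhci-omap", (kernel_ulong_t)&xhci_omap_data },
        { }, /* Terminating Entry */
};
MODULE_DEVICE_TABLE(platform, xhci_id_table);

static int xhci_probe(struct platform_device *pdev)
{
        const struct platform_device_id *id = platform_get_device_id(pdev);
        const struct xhci_driver_data *data =
                        (const struct xhci_driver_data *)id->driver_data;
        unsigned long irqflags;
        int irq;

        if (data->features & XHCI_POLL_EVENTS)
                return 0; /* set up a timer to poll the event ring instead */

        /* MSI vectors are exclusive, only the legacy line may be shared */
        irqflags = (data->features & XHCI_HAS_MSI) ? 0 : IRQF_SHARED;

        irq = platform_get_irq(pdev, 0);
        if (irq < 0)
                return irq;

        return request_irq(irq, xhci_irq, irqflags, "xhci", pdev);
}

The MODULE_DEVICE_TABLE() entry is also what would let udev autoload
xhci.ko once the glue creates the child device, which ties into the
udev flow described above.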