Re: [PATCH v2 0/3] ARM: PCI: implement generic PCI host controller

On Fri, Feb 14, 2014 at 12:05:27PM +0100, Arnd Bergmann wrote:

> >  2) The space in the IO fixed mapping needs to be allocated to PCI
> >     host drivers dynamically
> >     * pci_ioremap_io_dynamic that takes a bus address + cpu_physical
> >       address and returns a Linux virtual address.
> >       The first caller can get a nice translation where bus address ==
> >       Linux virtual address, everyone after can get best efforts.
> 
> I think we can have a helper that does everything we need to do
> with the I/O space:
> 
> * parse the ranges property
> * pick an appropriate virtual address window
> * ioremap the physical window there
> * compute the io_offset
> * pick a name for the resource
> * request the io resource
> * register the pci_host_bridge_window

Sounds good to me
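
To make that concrete, here is a rough sketch of how such a helper
could look on ARM (the name genpci_setup_io and the simple bump
allocator for the fixed virtual window are made up for illustration;
of_pci_range_parser, pci_ioremap_io() and pci_add_resource_offset()
are the existing pieces it would build on):

#include <linux/of_address.h>
#include <linux/ioport.h>
#include <linux/slab.h>
#include <linux/sizes.h>
#include <linux/pci.h>
#include <asm/io.h>

static int genpci_setup_io(struct device_node *np,
			   struct list_head *resources)
{
	/* Next free 64k slot in the fixed virtual I/O mapping */
	static unsigned int io_cursor;
	struct of_pci_range_parser parser;
	struct of_pci_range range;
	struct resource *res;
	int err;

	/* Parse the ranges property */
	if (of_pci_range_parser_init(&parser, np))
		return -EINVAL;

	for_each_of_pci_range(&parser, &range) {
		if (!(range.flags & IORESOURCE_IO))
			continue;

		/* Pick a window and ioremap the physical range there */
		err = pci_ioremap_io(io_cursor, range.cpu_addr);
		if (err)
			return err;

		/* Name and request the io resource */
		res = kzalloc(sizeof(*res), GFP_KERNEL);
		if (!res)
			return -ENOMEM;
		res->name = np->full_name;
		res->flags = IORESOURCE_IO;
		res->start = io_cursor;
		res->end = io_cursor + SZ_64K - 1;
		err = request_resource(&ioport_resource, res);
		if (err)
			return err;

		/*
		 * Compute the io_offset and register the
		 * pci_host_bridge_window: the offset is the Linux
		 * port number minus the PCI bus address.
		 */
		pci_add_resource_offset(resources, res,
					io_cursor - range.pci_addr);

		io_cursor += SZ_64K;
	}
	return 0;
}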

> > You will have overlapping physical IO bus addresses - each domain will
> > have a 0 IO BAR - but those will have distinct CPU physical addresses
> > and can then be uniquely mapped into the IO mapping. So at the struct
> > resource level the two domains have disjoint IO addresses, but each
> > domain uses a different IO offset.
> 
> This would be the common case, but when we have a generic helper function,
> it's actually not that hard to handle a couple of variations of that,
> which we may see in the field and can easily be described with the
> existing binding.

I agree the DT ranges binding has enough flexibility to describe all
of these cases, but I kind of circle back to the domain discussion and
ask 'Why?'.

As far as I can see there are two reasonable ways to handle IO space:
 - The IO space is 1:1 mapped to the physical CPU address. In most
   cases this would require 32-bit IO BARs in all devices.
 - The IO space in a domain is always 0 -> 64k and thus only ever
   requires 16-bit BARs (see the sketch below).

And this is possible too:
 - The IO space is 1:1 mapped to Linux virtual IO port numbers
   (which are a fiction) and devices sometimes require 32-bit
   IO BARs. This gives you lspci output that matches dmesg and
   /proc/ioports.
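
To put numbers on the 0 -> 64k per-domain case above (the port
ranges and the wrapper function are invented for the example;
pci_add_resource_offset() is the existing helper that records the
io_offset):

#include <linux/ioport.h>
#include <linux/pci.h>

/* Two domains, each with bus I/O 0->64k, at disjoint Linux ports */
static struct resource d0_io = {
	.name = "PCI0 I/O", .start = 0x00000, .end = 0x0ffff,
	.flags = IORESOURCE_IO,
};
static struct resource d1_io = {
	.name = "PCI1 I/O", .start = 0x10000, .end = 0x1ffff,
	.flags = IORESOURCE_IO,
};

static void setup_io_offsets(struct list_head *d0_res,
			     struct list_head *d1_res)
{
	/* io_offset = Linux port number - bus address */
	pci_add_resource_offset(d0_res, &d0_io, 0x00000);
	pci_add_resource_offset(d1_res, &d1_io, 0x10000);
}

Both domains see 0->64k on the bus, so 16-bit BARs always work, while
at the struct resource level the Linux port ranges stay disjoint.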

Things get more complex if you want to support legacy non-BAR IO (eg
VGA). Then you *really* want every domain to support 0->64k, and you
need driver support to set up a window for the legacy IO ports. (eg on
a multi-port root complex there is non-PCI-spec hardware that routes
the VGA addresses to the root port bridge that connects to the VGA
card.) Plus you probably need a memory hole around 1M.

But, I think this is overthinking things. IO space really is
deprecated, and 0->64k is a fine default for everything but very
specialized cases.

Jason



