Re: PCI

On Mon, Jul 09, 2007 at 01:40:37PM -0700, Kamal gupta wrote:
>  Hi
> 
>  Many thanks.
> 
>  On 7/9/07, Greg KH <greg@xxxxxxxxx> wrote:
> >
> > On Tue, Jul 03, 2007 at 03:17:35PM -0700, Kamal gupta wrote:
> > >  Hi
> > >
> > >  I got the answers to most of my questions except one thing. Would be
> > >  glad if anyone can give some hint.
> > >
> > >  Here is what I understood (very briefly), please feel free to point
> > >  out if I am wrong:
> >
> > What specifically are you looking to understand here?  PCI device and
> > bus initialization or the more "generic" driver core bus and driver
> > interaction?  They are two separate things.
> 
>  I am trying to understand how drivers get linked to their devices at
>  start-up and during hot plugging, with PCI as an example.

Ok, I think you are getting things mixed up here.  Devices are bound to
drivers by the bus calling the driver's probe() function.  Don't get
confused by PCI resources and irqs and other things like that here.
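
To make that concrete, here is roughly what a minimal PCI driver
skeleton looks like (the "example" name and the vendor/device IDs are
made up for illustration).  Binding happens when the PCI core matches
a device against the id_table and then calls the driver's probe():

#include <linux/module.h>
#include <linux/pci.h>

/* IDs invented for the example; a real driver lists its hardware. */
static struct pci_device_id example_ids[] = {
	{ PCI_DEVICE(0x1234, 0x5678) },
	{ 0, }
};
MODULE_DEVICE_TABLE(pci, example_ids);

/* Called by the PCI core when a device matching example_ids shows up,
 * at boot or at hotplug time -- same path either way. */
static int example_probe(struct pci_dev *pdev,
			 const struct pci_device_id *id)
{
	/* device specific setup goes here */
	return 0;
}

static void example_remove(struct pci_dev *pdev)
{
	/* undo whatever probe() did */
}

static struct pci_driver example_driver = {
	.name		= "example",
	.id_table	= example_ids,
	.probe		= example_probe,
	.remove		= example_remove,
};

static int __init example_init(void)
{
	/* registering makes the core try to match and probe devices */
	return pci_register_driver(&example_driver);
}

static void __exit example_exit(void)
{
	pci_unregister_driver(&example_driver);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");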

> > >  When the system initializes, the firmware scans for all the devices,
> >
> > Depends on the architecture.
> 
> 
>  but then why does the LDD book say "every PCI motherboard is equipped
>  with PCI-aware firmware and at system boot up it performs configuration
>  transactions with every PCI peripheral"?  Who else would do it
>  otherwise?  The kernel?

Yes, on some arches this is done by the kernel itself.  On i386 it is
done by a combination of the BIOS, ACPI, and the kernel.  In short, it's
a very hard thing to do correctly, it is very fragile, and it breaks a
lot :(

>  Specifically, I am trying to understand who fills in the device
>  structure, since it must be filled in before we register any driver.

Ok, that is a totally different question.

The struct device for a pci device is created by the pci core in
alloc_pci_dev(), which can be called by arch specific pci code, by the
pci core itself, or by a pci hotplug driver if needed.  Then the caller
fills in the needed pci specific information.  This happens in a few
different messy places (pci_create_bus() is one example.)
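
A much simplified sketch of that path, loosely modeled on the 2.6-era
drivers/pci/probe.c (details elided, so treat it as illustrative
pseudo-kernel code rather than the real function):

#include <linux/pci.h>

/* How the pci core discovers one device and creates its struct
 * pci_dev (which embeds the struct device). */
static struct pci_dev *scan_one_device(struct pci_bus *bus, int devfn)
{
	struct pci_dev *dev;
	u32 l;

	/* A config read of the Vendor ID tells us whether anything
	 * responds at this bus/devfn address. */
	if (pci_bus_read_config_dword(bus, devfn, PCI_VENDOR_ID, &l))
		return NULL;
	if (l == 0xffffffff || l == 0x00000000)
		return NULL;	/* nothing there */

	dev = alloc_pci_dev();
	if (!dev)
		return NULL;

	/* The caller fills in the pci specific information. */
	dev->bus = bus;
	dev->devfn = devfn;
	dev->vendor = l & 0xffff;
	dev->device = (l >> 16) & 0xffff;

	/* ... then BARs, class code, etc. are read, and the embedded
	 * struct device is registered with the driver core, which is
	 * what triggers driver matching. */
	return dev;
}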

> > > assigns address regions and so on,
> >
> > Again, depends on the architecture.
> >
> > > then comes the driver registration phase where driver matching takes
> > > place, which finally calls the probe function (which can reassign the
> > > address regions, if it wants to),
> >
> > No, it can't reassign pci address regions here, they are already fixed.
> 
>  What do we mean by this then (taken from Understanding Linux Network
>  Internals):
> 
>  Hardware initialization
> 
>  This is done by the device driver in cooperation with the generic bus layer
>  (e.g., PCI or USB). The driver, sometimes alone and sometimes with the help
>  of user-supplied parameters, configures such features of each device as the
>  IRQ and I/O address so that they can interact with the kernel.

Exactly, the driver handles this in the bus specific way needed.  Look
at the variety of PCI network drivers for examples of how they do this.
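
For instance, a typical probe() does something along these lines (a
rough sketch; the "example" names are invented and the exact steps
vary from driver to driver):

#include <linux/interrupt.h>
#include <linux/pci.h>

/* Interrupt handler stub, just for the example. */
static irqreturn_t example_interrupt(int irq, void *dev_id)
{
	return IRQ_HANDLED;
}

static int example_probe(struct pci_dev *pdev,
			 const struct pci_device_id *id)
{
	void __iomem *regs;
	int err;

	/* Wake the device up; this is also where the irq gets set up,
	 * so pdev->irq is only valid after this call. */
	err = pci_enable_device(pdev);
	if (err)
		return err;

	/* Claim the device's I/O and memory regions so nothing else
	 * grabs them. */
	err = pci_request_regions(pdev, "example");
	if (err)
		goto err_disable;

	/* Map BAR 0 (just an example) to reach the device registers. */
	regs = pci_iomap(pdev, 0, 0);
	if (!regs) {
		err = -ENOMEM;
		goto err_release;
	}

	err = request_irq(pdev->irq, example_interrupt, IRQF_SHARED,
			  "example", pdev);
	if (err)
		goto err_unmap;

	return 0;

err_unmap:
	pci_iounmap(pdev, regs);
err_release:
	pci_release_regions(pdev);
err_disable:
	pci_disable_device(pdev);
	return err;
}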

> > > I think IRQ configuration takes place here only.
> >
> > No, that happens when you call pci_enable_irq().
> 
> > > Once matched, we are done.
> >
> > Done with what?
> 
> 
>  Matching of driver with device.

You skipped a few steps there, but sure, if you want to say so :)

> > > Since it creates the id table in the registration phase, these entries
> > > are stored in /lib/modules/KERNEL_NAME/modules.pcimap for user space
> > > hot plugging.
> >
> > No, no one uses those tables anymore, they are still there for backward
> > compatibility with 2.4 kernels.  We should delete them soon.
> 
> 
>  Where else is the list maintained then?  How would one match a driver
>  with its device if the system doesn't maintain any such list?

The drivers themselves maintain this list, with the table that is
defined by the MODULE_DEVICE_TABLE() macro.  That places the needed
information into the module alias section, which is how modprobe does
its matching.  The same table is also passed to the bus specific
function that matches drivers to devices, which lives in the bus
specific code.
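
Taking the example_ids table from the skeleton above, the macro is what
exports that information (the exact alias text below is approximate,
but the mechanism is the point):

/* In the driver: */
MODULE_DEVICE_TABLE(pci, example_ids);

/*
 * depmod reads the alias section out of the .ko and writes lines like
 *
 *	alias pci:v00001234d00005678sv*sd*bc*sc*i* example
 *
 * into /lib/modules/KERNEL_VERSION/modules.alias.  When a device is
 * discovered, the kernel emits a MODALIAS string in the same format in
 * the uevent, and modprobe simply matches the two.
 */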

> > > Whenever the system detects a new device, kobject_hotplug invokes
> > > call_usermodehelper, which invokes /sbin/hotplug with an input
> > > parameter PCI and environment variables describing the device.  This
> > > /sbin/hotplug program then does the matching and we are done.
> >
> > Not really, all that callout does is call 'modprobe' with the module
> > alias given to it by the kernel, which then walks the list of module
> > aliases and loads all modules that match.  Then the kernel does the
> > matching of driver to device with the call to the individual driver
> > probe() functions until it finds a match.
> 
> 
>  So are you saying that /sbin/hotplug is never called?

On modern distros, yes, it is never called.
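
(To tie that back to the in-kernel half: once the module is loaded, the
PCI bus code matches driver to device with a plain comparison,
essentially like this, simplified from the pci_match_one_device() logic
in drivers/pci/pci-driver.c:)

#include <linux/pci.h>

/* Returns the matching table entry, or NULL.  PCI_ANY_ID acts as a
 * wildcard, so a driver can match whole vendor or class ranges. */
static const struct pci_device_id *
match_one_device(const struct pci_device_id *id, const struct pci_dev *dev)
{
	if ((id->vendor == PCI_ANY_ID || id->vendor == dev->vendor) &&
	    (id->device == PCI_ANY_ID || id->device == dev->device) &&
	    (id->subvendor == PCI_ANY_ID ||
	     id->subvendor == dev->subsystem_vendor) &&
	    (id->subdevice == PCI_ANY_ID ||
	     id->subdevice == dev->subsystem_device) &&
	    !((id->class ^ dev->class) & id->class_mask))
		return id;
	return NULL;
}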

> Isn't the list of modules you are talking about in
> /lib/modules/KERNEL_NAME/modules.pcimap?

Yes, it is there, but as I stated, nothing uses it anymore.  It can be
safely deleted with no effect.

> > >  My question is, I looked at probe.c, whose functions (scan slot and
> > >  scan bus) are used by the hotplug directory.  I think what is
> > >  happening here is, each of the core files defines an enable slot
> > >  function which configures the device and uses the functions in the
> > >  probe file.  Then there is a top file, pci_hotplug_core, which calls
> > >  this enable slot in the power_write_file function.  Can anyone please
> > >  explain what is happening here?  And if hot-plugging is there via
> > >  /sbin/hotplug, why this?
> >
> > Ok, I think you are getting PCI hotplug mixed up with the more generic
> > function of just loading and binding pci drivers to devices.  PCI
> > hotplug is to enable and disable PCI devices at runtime, if you have the
> > special hardware to allow this to happen.  These drivers control the
> > special hardware to do this.
> >
> > >  PS: Potentially, there is no device initialization phase for PCI
> > >  (unlike net_dev_init defined in net/core/dev.c), since the BIOS
> > >  firmware already did that stuff.  Am I right?
> >
> > I do not understand the question.
> 
> 
>  Can you please give the sequence of steps (combining the overall
>  picture) stating how devices get configured with their drivers at boot
>  time and during hotplugging?  I am finding it difficult to combine the
>  whole picture into one.  Thanks in advance.

I thought I laid all of that out in the Linux Device Drivers book.  What
specifically in that chapter does not make sense?  I really don't want
to just repeat the same thing here.

Also note that there really is no difference between boot time and
hotplug time as far as the drivers, the driver core, or the bus core
are concerned.  It's all the same codepath, which makes it much easier
to maintain and use and debug.  If you write a driver, you will never
have to worry about boot vs. hotplug time at all.

thanks,

greg k-h


