Re: Disabling IRQs

Hi,

Although I am not intimately familiar with the x86 interrupt structure,
the following may clear up the discussion somewhat.  The (ASCII art)
diagram below gives a generic configuration.  A device has one or more
interrupt lines going out to an interrupt controller.  The interrupt
controller gathers interrupt inputs from a number of devices and has
a single interrupt output towards the CPU.  There are many variants:
a device may have a single output that multiplexes multiple interrupt
conditions, a device may have multiple interrupt outputs, there may be
multiple interrupt controllers that route interrupts among each other,
and more than one interrupt line from a controller may be routed to a
CPU.  (Recently I worked with a SoC that has 3 CPUs, 5 interrupt
controllers, each with two outputs, and 70 interrupts that can be
distributed over the controllers...)

        +--------+           +------------+         +-----+
        |        |      ---->|            |         |     |
        | Device |---------->| Interrupt  |-------->| CPU |
        |        |      ---->| Controller |         |     |
        +--------+           |            |         +-----+
                             +------------+

Basically there are two methods for interrupt signalling:

	1. Level sensitive
	The device asserts its interrupt output towards the interrupt
	controller and leaves it asserted.  The interrupt controller
	in turn asserts its interrupt output towards the CPU and also
	leaves it asserted.

	2. Edge sensitive
	The device creates a low->high or high->low transition on its
	interrupt output and the interrupt controller uses this transition
	to clock in the interrupt.  It then asserts its interrupt
	output towards the CPU.  This effectively turns an edge sensitive
	interrupt into a level sensitive one.
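
On Linux, for example, this level/edge distinction usually shows up as
a trigger flag when a driver requests its interrupt line.  A minimal
sketch (the IRQ number, names and handler body are made up purely for
illustration):

	#include <linux/interrupt.h>

	#define MY_DEV_IRQ	42	/* hypothetical interrupt number */

	static irqreturn_t my_dev_isr(int irq, void *dev_id)
	{
		/* service the device here */
		return IRQ_HANDLED;
	}

	static int my_dev_setup_irq(void *dev)
	{
		/* Level sensitive: the line stays asserted until the
		 * driver removes the condition in the device.
		 * For an edge sensitive line one would pass
		 * IRQF_TRIGGER_RISING (or _FALLING) instead. */
		return request_irq(MY_DEV_IRQ, my_dev_isr,
				   IRQF_TRIGGER_HIGH, "my_dev", dev);
	}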

If CPU interrupts are not disabled, the CPU will be interrupted and the
interrupt handler will query the controller as to which interrupt is
active and will subsequently handle the interrupt in the device.  This
will usually result in taking the interrupt condition away.  In turn,
removing the interrupt condition from the device will result in taking
it away from the controller and then from the CPU.  Not clearing the
interrupt will keep it active and result in yet another interrupt from
the device as soon as the interrupt handler finishes.
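
As a rough sketch of that flow in a Linux driver (the status and clear
registers and their meaning are hypothetical; every real device defines
its own):

	#include <linux/interrupt.h>
	#include <linux/io.h>
	#include <linux/types.h>

	#define MY_STATUS	0x00	/* hypothetical status register */
	#define MY_CLEAR	0x04	/* hypothetical "remove condition" register */

	static irqreturn_t my_dev_isr(int irq, void *dev_id)
	{
		void __iomem *regs = dev_id;
		u32 status = readl(regs + MY_STATUS);

		if (!status)		/* nothing pending: not our interrupt */
			return IRQ_NONE;

		/* Servicing the device removes the interrupt condition,
		 * which deasserts the line towards the controller and,
		 * in turn, towards the CPU. */
		writel(status, regs + MY_CLEAR);

		return IRQ_HANDLED;
	}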

An important difference between level and edge sensitive interrupts is
usually in the clearing of the interrupt: with a level sensitive
interrupt you only need to take away the condition in the device, while
in the case of an edge sensitive interrupt you need to take it away in
the interrupt controller.  This means that a driver needs to know
details about the interrupt architecture in addition to knowledge about
the device.
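
In Linux this knowledge usually lives in the platform and interrupt
controller code rather than in each device driver: the genirq core is
told per interrupt line whether it is level or edge sensitive by the
choice of flow handler.  A hedged sketch, with a hypothetical controller
chip and made-up interrupt numbers:

	#include <linux/init.h>
	#include <linux/irq.h>

	extern struct irq_chip my_intc_chip;	/* hypothetical controller driver */

	static void __init my_board_init_irqs(void)
	{
		/* Level sensitive line: the core only masks/unmasks the
		 * controller; the driver clears the condition in the device. */
		irq_set_chip_and_handler(16, &my_intc_chip, handle_level_irq);

		/* Edge sensitive line: the core also acknowledges (clears)
		 * the latched edge in the controller before running the
		 * device's handler. */
		irq_set_chip_and_handler(17, &my_intc_chip, handle_edge_irq);
	}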

In the case of level sensitive interrupts, an interrupt may of course
get lost if the device loses the interrupt condition.  In that case the
interrupt handler may go out to the device to examine the interrupt
condition and find nothing.  In practice this rarely happens, but it is
still worth guarding against such a situation.  The only real case where
interrupts can be lost is when GPIO (general purpose I/O) lines are
directly tied to an interrupt controller.  In such a case an external
signal may disappear before the handler has had a chance to react.

Interrupt priorities were left out of the above discussion.  When you
have multiple interrupt inputs, the interrupt controller can "sort" them
according to their relative priority.  Usually the priority is indicated
as a number between 0 and some upper value (15, 31, ...).  If multiple
interrupts signal at the same time, the interrupt handling code will
first go out to handle the one with the highest priority, clear the
interrupt condition, and then handle the one that then has the highest
priority, etc.
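
A tiny sketch of what that sorting amounts to in software, assuming the
controller exposes a 32-bit pending mask and that a lower bit number
means higher priority (both are assumptions; many controllers instead
present the winning interrupt in a single register):

	#include <stdint.h>

	/*
	 * Return the highest-priority pending interrupt, or -1 if nothing
	 * is pending.  Here bit 0 is taken to be the highest priority.
	 */
	static int highest_pending(uint32_t pending)
	{
		int irq;

		for (irq = 0; irq < 32; irq++)
			if (pending & (1u << irq))
				return irq;

		return -1;
	}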

When programming with interrupts it is usually best to do all the
interrupt handling for a device on the device itself and never touch the
interrupt controller.  It is rarely necessary to touch the controller
anyway.  Also, the interrupt handler for a device is best implemented as
a (bounded) loop that keeps handling interrupts from the device as long
as there is any active interrupt condition.  This avoids the overhead of
repeatedly leaving and entering the handler as long as there are
interrupts active.  Imagine a serial port that has a transmit interrupt
active and during the handling of the transmit interrupt a character is
received: the loop will pick up the receive interrupt in the same pass
instead of taking a second trip through the handler.
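
A sketch of such a bounded loop for the serial port example (the
register layout, bit names and loop bound are all made up; a real UART
driver differs in the details):

	#include <linux/interrupt.h>
	#include <linux/io.h>
	#include <linux/types.h>

	#define UART_IRQSTAT	0x08	/* hypothetical pending-sources register */
	#define TX_EMPTY	0x01
	#define RX_READY	0x02
	#define MAX_LOOPS	64	/* bound so a stuck device cannot hang us */

	static irqreturn_t my_uart_isr(int irq, void *dev_id)
	{
		void __iomem *regs = dev_id;
		bool handled = false;
		int loops;
		u32 stat;

		for (loops = 0; loops < MAX_LOOPS; loops++) {
			stat = readl(regs + UART_IRQSTAT);
			if (!stat)
				break;	/* no condition left, we are done */

			if (stat & TX_EMPTY) {
				/* refill the transmitter; clears TX_EMPTY */
			}
			if (stat & RX_READY) {
				/* drain the receiver; clears RX_READY */
			}
			handled = true;
		}

		return handled ? IRQ_HANDLED : IRQ_NONE;
	}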

Also, writing interrupt handlers without a very good understanding of
the underlying hardware is usually asking for (hard) trouble.  It
therefore pays to make a couple of diagrams that show how the hardware
is organised and how interrupts are handled.

I hope the above answered some of the questions raised.

As an additional question: I have not been able to find how the kernel
assigns priorities, other than that it is done statically (i.e. hard
coded) during platform initialisation.  Is that correct?

Regards,
Hans Zuidam
--
Hans Zuidam
De Koppele 136, 5632 LD Eindhoven, The Netherlands
Tel. +31 40 2481546, Mob. +31 6 42345456
h.zuidam@xxxxxxxxxxxx



--
Kernelnewbies: Help each other learn about the Linux kernel.
Archive:       http://mail.nl.linux.org/kernelnewbies/
FAQ:           http://kernelnewbies.org/faq/

