I am writing an open source driver for a multifunction PCI card and can't seem to find an answer to a problem through searching or digging through kernel code, so I'm posting here for help. My apologies if RTFM-type replies are warranted.

This PCI card contains several hardware cores, all connected through a local bus behind a PCI bridge. The bridge offers 7 partitions inside BAR 0 that can be panned around the local-bus address space by setting registers in an 8th, fixed-address partition. It also has 32 incoming IRQ lines - many of which are wired to IRQ-generation lines on the various cores - and 32-bit interrupt mask, status, and clear registers that control the cascading of those 32 incoming IRQs onto PCI INT-A.

My problem is that several of the cores already have driver support in Linux; e.g. NS16550 UARTs, some FPGA cores from opencores.org, etc. I've fixed one of the windows over the register spaces for most of the ubiquitous blocks and remapped the PCI I/O space into kernel vmem. If I pass the PCI INT-A IRQ number to the functions that create a new UART instance, for example, things work, but the UART interrupt service code gets called for every IRQ coming off the card - which may fire 1000:1 more often from other parts of the board than from the slow UART. This seems a bit inefficient considering I have the ability to mask and unmask the specific IRQ each component drives on the PCI bridge.

What I'm really looking for is the gold-standard way of cascading the PCI INT-A coming from the card into n new interrupts, as well as the preferred way of allocating the next available interrupt number(s). I've found two examples of how this can be done in the kernel - the MIPS core interrupt subsystem and the PCI MSI driver - but neither does things the same way. The MIPS interrupt code basically reserves the first 8 interrupt numbers for the hardware IRQ lines and select software exceptions.
Then whatever MIPS-centric board implementation takes over manually enumerates however many cascaded interrupt sources it needs from 8 onward. This fixed mapping doesn't port well to other architectures - especially x86 with multiple APICs. The PCI MSI driver - which seems to be x86-only - allocates cascade interrupts by searching the interrupt vector table for entries that appear to be unused, but it does so through a means that is not well documented, and I suspect it may cause portability and maintenance issues in the future. Unfortunately, those are the only good references I can find.

Does anyone have any opinions on how this should be handled? Or could you point me toward additional documentation or code I can study further?

Thanks,
Alan Hightower
--
To unsubscribe from this list: send the line "unsubscribe linux-pci" in the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html