Vaibhav Hiremath <hvaibhav@xxxxxx> writes:

> With the addition of the TI81XX and AM33XX family of devices, the
> number of supported interrupts has increased from 96 to 128.  The
> current IRQ handling implementation is hardcoded to 96 interrupts
> (with 3 register sets to handle).  This patch cleans up the code to
> raise the maximum number of supported interrupts to 128, with
> dynamic detection of the number of registers required to handle all
> interrupts.
>
> Signed-off-by: Vaibhav Hiremath <hvaibhav@xxxxxx>
> Signed-off-by: Afzal Mohammed <afzal@xxxxxx>
> Cc: Tony Lindgren <tony@xxxxxxxxxxx>
> Cc: Kevin Hilman <khilman@xxxxxx>
> Cc: Paul Walmsley <paul@xxxxxxxxx>
> ---
> Ideally, we should use dynamic allocation to allocate memory
> for registers/arrays,

Yes.

> maybe too much cleanup for this patch,

There is no such thing as too much cleanup. ;)  And the INTC code is
in need of it, IMO.

> so as of now restricting to minimal changes to fit devices
> like am33xx/ti81xx.

Then someone else will have to do the cleanup later.  It would be
greatly appreciated if you could do the necessary cleanup in order to
cleanly add support for more SoCs.  Yes, we probably should've
insisted on this when support for TI81xx was added, but that one
slipped in under the radar.

For starters, the notion of banks in this code is rather messed up
and needs a cleanup.  A bank is supposed to be a group of 32
interrupts, and the INTC is made up of 3 (or 4) banks.  However, the
current code creates a single "bank" of 96 (or 128) interrupts.  It
also confuses which registers are part of a bank and which are global
to the INTC.  This confusion exists in both init and context
save/restore.

IMO, to clean this up, the notion of banks first needs to be fixed so
that the code distinguishes between what acts on a single bank and
what acts on the whole INTC.  Then the init/alloc should be done
dynamically, based on the number of interrupts passed to
omap_init_irq().

Kevin
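
P.S. To make that concrete, here is a rough, untested sketch of what
the dynamic per-bank allocation could look like.  The helper
omap_alloc_irq_banks() and the struct layout are hypothetical,
invented for illustration; only DIV_ROUND_UP() and kcalloc() are
existing kernel APIs, and the 0x20 register stride is an assumption
based on the INTC's per-bank MIRn spacing:

    #include <linux/init.h>
    #include <linux/io.h>
    #include <linux/kernel.h>
    #include <linux/slab.h>

    #define IRQS_PER_BANK	32

    /*
     * Hypothetical per-bank state: per-bank registers only, nothing
     * INTC-global, so context save/restore can iterate over banks.
     */
    struct omap_irq_bank {
    	void __iomem *base_reg;	/* this bank's register block */
    	u32 saved_mask;		/* per-bank context for save/restore */
    };

    static struct omap_irq_bank *irq_banks;
    static unsigned int irq_bank_count;

    static int __init omap_alloc_irq_banks(void __iomem *intc_base,
    				       unsigned int nr_irqs)
    {
    	unsigned int i;

    	/* One bank per 32 interrupts: 96 -> 3 banks, 128 -> 4 */
    	irq_bank_count = DIV_ROUND_UP(nr_irqs, IRQS_PER_BANK);

    	irq_banks = kcalloc(irq_bank_count, sizeof(*irq_banks),
    			    GFP_KERNEL);
    	if (!irq_banks)
    		return -ENOMEM;

    	/* Assumed 0x20 stride between per-bank register blocks */
    	for (i = 0; i < irq_bank_count; i++)
    		irq_banks[i].base_reg = intc_base + 0x20 * i;

    	return 0;
    }

omap_init_irq() would then call something like this with its
interrupt-count argument instead of relying on a fixed 96-interrupt
layout.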