The main reason I took this approach is that I thought it was cleaner from the perspective of the busses, which are totally independent (except for the fact that they share a single hardware interrupt). I did not make any measurements, so I doubt I have anything to add that you don't already know.

I saw other usages of chained interrupts that do the same thing (scan a "status" register and use its bits to raise software interrupts), and I thought that is basically what the dummy irqchip code is for. The only thing I thought was novel here was breaking the dummy irqchip out into its own driver; it is not my idea, but I do think it makes things a lot cleaner. Nevertheless, it should be cheap in terms of number of instructions; the most expensive part looks like looking up the mapping. In any case, I think the low-hanging fruit here is supporting buffering or DMA, like Ben suggested.

To address the comment about it being over-engineered: outside of the init function (which would exist regardless of how we do this; if not here, then in the I2C driver), the code is actually pretty small and generic.

All that being said, it would not be very hard to do this without the dummy irqchip code, and it would definitely be smaller in terms of indirection and space used, but I think the result would actually be more complicated to read. We would be going back to having an I2C controller alongside the I2C busses, where all the controller does is read the IRQ register and then call the appropriate bus IRQ handler, which looks a lot like a dummy irqchip anyway (see the sketch below).

Cheers
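
Here is roughly the pattern I am describing. This is only a minimal sketch: the "foo" names, the register offset, the bus count, and the modern chained-handler signature are assumptions for illustration, not taken from the actual driver.

    #include <linux/bitops.h>
    #include <linux/io.h>
    #include <linux/irq.h>
    #include <linux/irqchip/chained_irq.h>
    #include <linux/irqdomain.h>

    /* Hypothetical names; the real driver's register layout will differ. */
    #define FOO_NUM_BUSSES	4
    #define FOO_IRQ_STATUS	0x00

    struct foo_irq {
    	void __iomem *regs;
    	struct irq_domain *domain;	/* one virtual interrupt per bus */
    };

    static void foo_irq_handler(struct irq_desc *desc)
    {
    	struct foo_irq *foo = irq_desc_get_handler_data(desc);
    	struct irq_chip *chip = irq_desc_get_chip(desc);
    	unsigned long status;
    	unsigned int bit;

    	chained_irq_enter(chip, desc);

    	/* One shared hardware interrupt, one status register read... */
    	status = readl(foo->regs + FOO_IRQ_STATUS);

    	/* ...and each pending bit is forwarded to that bus's own virq. */
    	for_each_set_bit(bit, &status, FOO_NUM_BUSSES)
    		generic_handle_irq(irq_find_mapping(foo->domain, bit));

    	chained_irq_exit(chip, desc);
    }

The setup side would just allocate an irq_domain with one interrupt per bus and install this handler on the shared hardware interrupt (e.g. via irq_set_chained_handler_and_data()); each bus driver then requests its own virtual interrupt as usual. That is essentially all the dummy irqchip driver has to do, whether it lives in its own file or inside the I2C controller.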