On Thu, Jun 02, 2016 at 05:44:25PM -0500, Rob Herring wrote:
> > > > In theory it would even be possible to just require a DT node per
> > > > cpulocal timer, but I didn't see a good way to make the bindings
> > > > represent the relationship to cpus or to make the driver handle irqs
> > > > correctly for such a setup, so I'd need a viable proposal for how that
> > > > could be done to even consider such an approach.
> > >
> > > Yeah, there's not really a standard way to map per cpu blocks to cpus.
> > > We could, but doesn't really seem necessary here.
> > >
> > > For the irqs, percpu irqs doesn't help you?
> >
> > What I mean is that, if there were a separate device node and driver
> > instance per cpu, they'd all want to register the same irq just to
> > handle it on their own cpu, so we'd have a lot of spurious handlers
> > running. The right way to model this, I think, would be as a virtual
> > irqchip that's the irq parent of all the timer nodes, and that
> > multiplexes the real irq to one virq per cpu (where the current cpu id
> > becomes the irq number in its irq domain). But that's a lot of virtual
> > infrastructure just for the sake of modelling each percpu timer as its
> > own DT node and I don't think it makes sense to do it that way.
>
> I would have thought your interrupt controller did all this. On the ARM
> GIC for example, you have the same irq number but there is a per cpu
> interface and really N (== # cpus) physical irq lines.

I've looked at the ARM GIC code and bindings, and I don't see where the
per-cpu interrupt interfaces are modelled with multiple interrupt
controller nodes or irq domains. It looks to me like it just uses a
single interrupt controller/domain with percpu irqs. Does that match
your understanding?

Rich
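
P.S. For concreteness, the percpu irq pattern under discussion looks
roughly like the sketch below. The names (my_timer_evt, my_timer_isr,
my_timer_setup) are made up for illustration; request_percpu_irq() and
enable_percpu_irq() are the real kernel API. The point is that the
handler is registered once for the one irq number, and the core hands
each invocation the current CPU's instance of the per-cpu data, so no
driver instance ever fields another CPU's interrupt as spurious.

	#include <linux/interrupt.h>
	#include <linux/irq.h>
	#include <linux/percpu.h>

	/* Hypothetical per-cpu clockevent state. */
	struct my_timer_evt {
		int cpu;
	};

	static DEFINE_PER_CPU(struct my_timer_evt, my_timer_evt);

	static irqreturn_t my_timer_isr(int irq, void *dev_id)
	{
		/* dev_id is this CPU's instance of my_timer_evt. */
		struct my_timer_evt *evt = dev_id;

		/* Ack and handle this CPU's timer here. */
		(void)evt;
		return IRQ_HANDLED;
	}

	static int my_timer_setup(unsigned int irq)
	{
		int err;

		/* One registration covers all CPUs. */
		err = request_percpu_irq(irq, my_timer_isr, "my-timer",
					 &my_timer_evt);
		if (err)
			return err;

		/*
		 * Each CPU still enables its own copy, typically from a
		 * CPU-hotplug/online callback running on that CPU.
		 */
		enable_percpu_irq(irq, IRQ_TYPE_NONE);
		return 0;
	}

This only works because the irqchip marks the irq as percpu-devid (as
the GIC does for PPIs), which is why a single interrupt controller
node and irq domain suffice without any virtual mux layer.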