On Wed, Oct 26 2016 at 05:17:34 PM, Vineet Gupta <Vineet.Gupta1 at synopsys.com> wrote:
> On 10/26/2016 07:05 AM, Marc Zyngier wrote:
>> It definitely feels weird to encode the interrupt affinity in the DT
>> (the kernel and possibly userspace usually know much better than the
>> firmware). What is the actual reason for storing the affinity there?
>
> The IDU intc supports various interrupt distribution modes (Round
> Robin, send to one cpu only etc) which in turn map to the affinity
> setting. When doing the DT binding, we decided to add this to DT to
> get the "seed" value for affinity - which the user could optionally
> change after boot. This seemed like a benign design choice at the
> time.

Right. But is this initial setting something that the kernel has to
absolutely honor? The usual behaviour is to let the kernel pick
something sensible, and to let the user mess with it afterwards.

Is there any part of the kernel that would otherwise depend on this
affinity being set to a particular mode? If the answer is "none", then
I believe we can safely ignore that part of the binding (and maybe
deprecate it in the documentation).

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny.
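
For reference, runtime affinity changes (e.g. a write to
/proc/irq/<N>/smp_affinity) end up in the irq_chip's .irq_set_affinity
callback, which is where a controller like the IDU would translate the
requested cpumask into one of its distribution modes. Below is a
minimal sketch of such a callback, assuming hypothetical helpers:
idu_set_mode(), IDU_MODE_DEST and IDU_MODE_RR stand in for the real
register programming and are not the actual driver interface.

	#include <linux/errno.h>
	#include <linux/irq.h>
	#include <linux/cpumask.h>

	/*
	 * Hypothetical sketch: map the requested cpumask onto either a
	 * "send to one CPU" mode or a round-robin distribution mode.
	 * idu_set_mode() is a made-up placeholder for the hardware access.
	 */
	static int idu_irq_set_affinity(struct irq_data *data,
					const struct cpumask *cpumask,
					bool force)
	{
		cpumask_t online;
		unsigned int destination;

		/* Only consider CPUs that are actually online */
		if (!cpumask_and(&online, cpumask, cpu_online_mask))
			return -EINVAL;

		if (cpumask_weight(&online) == 1) {
			/* Single CPU requested: direct delivery to that CPU */
			destination = cpumask_first(&online);
			idu_set_mode(data->hwirq, IDU_MODE_DEST, destination);
		} else {
			/* Several CPUs requested: fall back to round-robin */
			idu_set_mode(data->hwirq, IDU_MODE_RR, 0);
		}

		return IRQ_SET_MASK_OK;
	}

With a callback along these lines, whatever "seed" the DT provides (or
the kernel's own sensible default) is only an initial state, and the
user remains free to change it after boot.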