Re: [PATCH v2 RESEND 2/2] ARM: local timers: add timer support using IO mapped register

On Fri, Sep 28, 2012 at 06:15:53PM +0100, Lorenzo Pieralisi wrote:
> On Fri, Sep 28, 2012 at 04:57:46PM +0100, Dave Martin wrote:
> > [ Note: please aim to CC devicetree-discuss@xxxxxxxxxxxxxxxx with any
> > patches or bindings relevant to device tree. ]
> > 
> > [ Lorenzo, there's a question for you further down this mail. ]
> 
> [...]
> 
> > > > > +  If using the memory mapped interface, list the interrupts for each core,
> > > > > +  starting with core 0.
> > > 
> > > I take it that core 0 means physical cpu 0 (i.e. MPIDR.Aff{2,1,0} == 0)?
> > 
> > Lorenzo, should we have a standard way of referring to CPUs and topology
> > nodes documented as part of the topology bindings?  We certainly need
> > rules of some kind, since when the topology is non-trivial there is no
> > well-defined "first" CPU, nor any single correct order in which to list
> > CPUs.
> 
> I think, and that's just my opinion, that whatever solution we go for to
> describe the topology must contain the information needed by all kernel
> subsystems to retrieve HW information. I do not think we should document
> how devices connect to CPU(s)/Cluster(s) in the topology bindings per se,
> since those are properties that belong to device nodes.

Well, I guess the other approach is to establish a firm precedent, which
means that we need to watch carefully for people proposing new bindings
which refer to the topology in inconsistent ways.
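
To see why this matters, consider a sparse two-cluster layout.  The sketch
below is purely illustrative (hypothetical addresses, interrupt numbers and
node names; standard properties omitted for brevity; not a proposed binding):

	/* Illustrative only, not a real binding. */
	cpus {
		#address-cells = <1>;
		#size-cells = <0>;

		cpu@0   { compatible = "arm,cortex-a15"; reg = <0x000>; };
		cpu@1   { compatible = "arm,cortex-a15"; reg = <0x001>; };
		cpu@100 { compatible = "arm,cortex-a7";  reg = <0x100>; };
		cpu@101 { compatible = "arm,cortex-a7";  reg = <0x101>; };
	};

	timer@2c000000 {
		/* "One interrupt per core, starting with core 0", but is
		 * that MPIDR order, cpu-node order, or Linux logical CPU
		 * order?  The binding text does not say. */
		interrupts = <0 29 4>, <0 30 4>, <0 31 4>, <0 32 4>;
	};

With MPIDRs like 0x000, 0x001, 0x100 and 0x101 there is no obvious "first"
CPU beyond whatever convention we choose, and nothing above ties the n-th
interrupt to a particular cpu node.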

> 
> There must be a common way for all devices to link to the topology, though.
> 
> The topology must be descriptive enough to cater for all required cases,
> and that's what Mark (with the PMU bindings) and the rest of us are trying
> to come up with: a solid way to represent the topology of current and
> future ARM systems in DT.
> 
> First idea I implemented and related LAK posting:
> 
> http://lists.infradead.org/pipermail/linux-arm-kernel/2012-January/080873.html
> 
> Are "cluster" nodes really needed or "cpu" nodes are enough ? I do not
> know, let's get this discussion started, that's all I need.

One thing which now occurs to me on this point is that if we want to describe
the CCI properly in the DT (and we do), then we need a way to describe the
mapping between clusters and CCI slave ports.  Currently that knowledge just
has to be a hard-coded hack somewhere: it's not probeable at all.
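
To make that concrete, what I have in mind is something along these lines
(purely a sketch: the "slave-ports" property, the addresses and the cluster
node layout are made up, not an existing binding):

	cpus {
		CL0: cluster0 {
			cpu@0 { reg = <0x000>; };
			cpu@1 { reg = <0x001>; };
		};
		CL1: cluster1 {
			cpu@100 { reg = <0x100>; };
			cpu@101 { reg = <0x101>; };
		};
	};

	cci@2c090000 {
		/* Hypothetical property: one entry per CCI slave port,
		 * pointing at the cluster behind that port. */
		slave-ports = <&CL0>, <&CL1>;
	};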

I'm not sure how we do that, or how we describe the cache topology, without
the clusters being explicit in the DT

...unless you already have ideas?

Cheers
---Dave

> But declaring IRQs in physical CPU id order (and mind, as you say, physical
> CPU ids, i.e. MPIDR values, can be sparsely populated) while initializing
> them *thinking* the order is the logical one is plainly broken.
> 
> > The topology may also be sparsely populated (e.g.,
> > Aff[2,1,0] in { (0,0,0), (0,0,1), (0,1,0), (0,1,1), (0,1,2), (0,1,3) })
> > 
> > It would be bad if different driver bindings ended up solving this in
> > different ways (even non-broken ones).
> 
> Yes, I agree and code that relies on any temporary work-around to tackle
> this problem should not be merged before we set in stone proper topology
> bindings.
> 
> Lorenzo
> 

