On Wed, Feb 11, 2015 at 3:29 AM, Mark Rutland <mark.rutland@xxxxxxx> wrote:
> On Thu, Jan 29, 2015 at 03:58:42AM +0000, Suman Anna wrote:
>> On 01/22/2015 12:56 PM, Mark Rutland wrote:
[..]
>> > That's the only way I would expect this to possibly remain stable
>> > over time, and it's the entire reason for #hwlock-cells, no?
>> >
>> > How do you expect the other components sharing the hwspinlocks to
>> > be described?
>>
>> Yes indeed, this is what any of the clients will use on Linux. But
>> this is not necessarily the semantics for exchanging hwlocks with
>> the other processor(s), which is where the global ID space comes
>> into the picture.
>
> I did try to consider that above. Rather than thinking about the
> numbering as "global", think of it as unique within a given pool
> shared between processors. That's what the "poolN" names are about
> above.
>
> That way you can dynamically allocate within the pool and know that
> Linux and the SW on the other processors will use the same ID. You
> can have pools that span multiple hwlock hardware blocks, and you
> can have multiple separate pools in operation at once.
>
> Surely that covers the cases you care about?
>
> If using names is clunky, we could instead have a pool-hwlocks
> property for that purpose.

Just to make sure I understand your suggestion: the communicating
entity would list all the potential hwlocks (and GPIOs, etc.) that it
can share, and the key to be communicated would then basically be the
index into that list? Like:

awesome-hub {
	pool-hwlocks = <&a 1>, <&a 3>, <&b 5>;
};

And a communicated "lock 2" would then mean lock 3 from block a?

This would make it possible to describe which locks are available in
this "allocation pool" and would keep the allocation logic out of the
hwlock core, as the awesome-hub driver could simply trial-and-error
(with some logic) through the list; a rough sketch of that loop is
appended below.

Is this understanding correct?

Regards,
Bjorn
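
Purely for illustration, roughly the loop I have in mind. This is only
a sketch: hub_claim_pool_lock() and pool_ids[] are made up for the
example (pool_ids[] standing in for whatever IDs parsing the proposed
pool-hwlocks property would yield), while hwspin_lock_request_specific()
is the existing hwspinlock core API:

#include <linux/hwspinlock.h>

static struct hwspinlock *hub_claim_pool_lock(const unsigned int *pool_ids,
					      size_t nids, unsigned int *key)
{
	struct hwspinlock *lock;
	size_t i;

	for (i = 0; i < nids; i++) {
		/*
		 * Try each pool entry in turn; an entry that is
		 * already taken simply fails the request and we
		 * move on to the next one.
		 */
		lock = hwspin_lock_request_specific(pool_ids[i]);
		if (lock) {
			/*
			 * The index into the pool, not the global
			 * ID, is the key we would communicate to
			 * the other processor.
			 */
			*key = i;
			return lock;
		}
	}

	return NULL;	/* pool exhausted */
}

The nice property is that the hwlock core never needs to know about
pools at all; the whole allocation policy lives in the driver that
owns the pool-hwlocks property.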