Frank Rowand <frowand.list@xxxxxxxxx> writes:

> On 07/31/18 07:17, Rob Herring wrote:
>> On Tue, Jul 31, 2018 at 12:34 AM Michael Ellerman <mpe@xxxxxxxxxxxxxx> wrote:
>>>
>>> Hi Rob/Frank,
>>>
>>> I think we might have a problem with the phandle_cache not interacting
>>> well with of_detach_node():
>>
>> Probably needs a similar fix as this commit did for overlays:
>>
>> commit b9952b5218added5577e4a3443969bc20884cea9
>> Author: Frank Rowand <frank.rowand@xxxxxxxx>
>> Date:   Thu Jul 12 14:00:07 2018 -0700
>>
>>     of: overlay: update phandle cache on overlay apply and remove
>>
>>     A comment in the review of the patch adding the phandle cache said that
>>     the cache would have to be updated when modules are applied and removed.
>>     This patch implements the cache updates.
>>
>>     Fixes: 0b3ce78e90fc ("of: cache phandle nodes to reduce cost of
>>     of_find_node_by_phandle()")
>>     Reported-by: Alan Tull <atull@xxxxxxxxxx>
>>     Suggested-by: Alan Tull <atull@xxxxxxxxxx>
>>     Signed-off-by: Frank Rowand <frank.rowand@xxxxxxxx>
>>     Signed-off-by: Rob Herring <robh@xxxxxxxxxx>
>
> Agreed.  Sorry about missing the of_detach_node() case.
>
>> Really what we need here is an "invalidate phandle" function rather
>> than free and re-allocate the whole damn cache.
>
> The big hammer approach was chosen to avoid the race conditions that
> would otherwise occur.  OF does not have a locking strategy that
> would be able to protect against the races.
>
> We could maybe implement a slightly smaller hammer by (1) disabling
> the cache, (2) invalidating a phandle entry in the cache, and
> (3) re-enabling the cache.  That is an off-the-cuff thought - I would
> have to look a little more carefully to make sure it would work.
>
> But I don't see a need to add the complexity of the smaller hammer
> or the bigger hammer of proper locking _unless_ we start seeing that
> the cache is being freed and re-allocated frequently.
> For overlays I don't expect the high frequency, because it happens on
> a per overlay removal basis (not a per node removal basis).  For
> of_detach_node() the event _is_ on a per node removal basis.
>
> Michael, do you expect node removals to be a frequent event with low
> latency being important?  If so, a rough guess on what the frequency
> would be?

No, in practice we expect them to be fairly rare - in the thousands at
most, I think.

TBH you could just disable the phandle_cache on the first detach and
that would be fine by me.  I don't think we've ever noticed phandle
lookups being much of a problem for us on our hardware.

cheers