Re: [PATCH] of: allocate / free phandle cache outside of the devtree_lock

On Tue, Nov 12, 2019 at 3:10 AM Sebastian Andrzej Siewior
<bigeasy@xxxxxxxxxxxxx> wrote:
>
> On 2019-11-11 21:35:35 [-0600], Rob Herring wrote:
> > >    28d0e36bf9686 ("OF: Fixup resursive locking code paths")
> > >    d6d3c4e656513 ("OF: convert devtree lock from rw_lock to raw spinlock")
> >
> > So to summarize, we changed mainline to fix RT which then broke RT. :)
>
> correct, but we were good until v4.17-rc1 :)
>
> > > I've been looking into making devtree_lock a spinlock_t which would
> > > avoid this patch. I haven't seen an issue during boot on arm64 even
> > > with hotplug.
> >
> > Did you look into using RCU reader locks any more?
>
> A little bit. The writers, which modify the node, would need to replace
> the whole node. So this is where things got a little complicated.

Why is that exactly? That's pretty much how node and property updates
work anyway, though maybe not strictly enforced. The other updates are
atomic flags and ref counts, which I assume are fine. The spinlock
protects traversing the tree of nodes and the list of properties.
Traversing and updates would need to follow similar semantics as
rculist, right? BTW, there have been some thoughts about moving the
node and property lists over to list_head. We may want to do that
rather than trying to roll our own RCU code.
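
To make that concrete, a minimal sketch of what rculist-based property
lists might look like. Everything below is hypothetical: struct
property in mainline is still a hand-rolled singly linked list, and
none of these names exist; it only illustrates the rculist semantics
being discussed.

#include <linux/rculist.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/string.h>

/* Hypothetical: a property hanging off a node via list_head. */
struct property_example {
	const char *name;
	struct list_head list;	/* linked into the node's property list */
};

/* Reader: no devtree_lock, just an RCU read-side critical section. */
static bool has_prop_rcu(struct list_head *props, const char *name)
{
	struct property_example *pp;
	bool found = false;

	rcu_read_lock();
	list_for_each_entry_rcu(pp, props, list) {
		if (!strcmp(pp->name, name)) {
			found = true;
			break;
		}
	}
	rcu_read_unlock();
	return found;
}

/* Writer: still serialized (e.g. by of_mutex), publishes with _rcu ops. */
static void replace_prop(struct property_example *old,
			 struct property_example *new)
{
	list_replace_rcu(&old->list, &new->list);
	synchronize_rcu();	/* ensure no reader still sees 'old' */
	kfree(old);
}

The reader side then looks much like what of_find_property() does
today, just without taking devtree_lock.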

> Frank wasn't a big fan of it back then and he still wasn't a few weeks
> back.

I guess you spoke at ELCE. RCU seems like the right locking to me since
we're almost entirely reads, and I haven't seen another proposal.

> If you two agree to prefer RCU over this patch then I would look more
> into adding RCU into the lookup path. The argument was that this isn't
> time critical. I'm just trying to avoid replacing the locking for
> nothing.
> So, should I come up with a RCU patch?

Let's see what Frank says first.

While this patch is a bit of a band-aid, I don't think it complicates
the situation in any way that prevents coming up with a better solution
later. The other option is to get rid of the memory allocation
altogether. My preference for the cache was a simpler solution that was
truly a cache (i.e. a fixed size that could miss). The performance
wasn't quite as good, though.
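
For reference, a rough sketch of that fixed-size, allowed-to-miss
variant. The names, the index function, and the fallback helper are all
illustrative, not actual mainline code:

#include <linux/of.h>

#define PHANDLE_CACHE_SZ	128	/* fixed size, power of two */

/*
 * Cached pointers do not hold a reference, so entries must be cleared
 * when a node is detached. Assumes callers hold devtree_lock.
 */
static struct device_node *phandle_cache[PHANDLE_CACHE_SZ];

/* Hypothetical helper: full tree walk, returns a referenced node. */
static struct device_node *find_node_by_phandle_slow(phandle handle);

static struct device_node *find_node_by_phandle_cached(phandle handle)
{
	unsigned int idx = handle & (PHANDLE_CACHE_SZ - 1);
	struct device_node *np = phandle_cache[idx];

	if (np && np->phandle == handle)
		return of_node_get(np);		/* hit */

	/* miss: fall back to the full walk, then cache the result */
	np = find_node_by_phandle_slow(handle);
	if (np)
		phandle_cache[idx] = np;
	return np;
}

Because the array is statically sized, there's nothing to allocate or
free under devtree_lock, which is the problem this patch works around.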

Rob


