Re: [PATCH] of: allocate / free phandle cache outside of the devtree_lock

On 11/12/19 6:48 PM, Rob Herring wrote:
> On Tue, Nov 12, 2019 at 5:46 PM Frank Rowand <frowand.list@xxxxxxxxx> wrote:
>>
>> On 11/12/19 9:55 AM, Rob Herring wrote:
>>> On Tue, Nov 12, 2019 at 3:10 AM Sebastian Andrzej Siewior
>>> <bigeasy@xxxxxxxxxxxxx> wrote:
>>>>
>>>> On 2019-11-11 21:35:35 [-0600], Rob Herring wrote:
>>>>>>    28d0e36bf9686 ("OF: Fixup resursive locking code paths")
>>>>>>    d6d3c4e656513 ("OF: convert devtree lock from rw_lock to raw spinlock")
>>>>>
>>>>> So to summarize, we changed mainline to fix RT which then broke RT. :)
>>>>
>>>> correct, but we were good until v4.17-rc1 :)
>>>>
>>>>>> I've been looking into making devtree_lock a spinlock_t which would
>>>>>> avoid this patch. I haven't seen an issue during boot on arm64 even
>>>>>> with hotplug.
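(For context, the change Sebastian describes would essentially undo the raw-lock
part of d6d3c4e656513; very roughly, and ignoring the many call sites:

	/* drivers/of/base.c today, after d6d3c4e656513: */
	DEFINE_RAW_SPINLOCK(devtree_lock);

	raw_spin_lock_irqsave(&devtree_lock, flags);
	/* ... lookup / modification ... */
	raw_spin_unlock_irqrestore(&devtree_lock, flags);

	/* with a plain spinlock_t, which becomes a sleeping lock on RT: */
	DEFINE_SPINLOCK(devtree_lock);

	spin_lock_irqsave(&devtree_lock, flags);
	/* ... lookup / modification ... */
	spin_unlock_irqrestore(&devtree_lock, flags);

The open question is whether any lookup path can run from a context that must
not sleep on RT.)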
>>>>>
>>>>> Did you look into using RCU reader locks any more?
>>>>
>>>> A little bit. The writers, which modify the node, would need to replace
>>>> the whole node. So this is where things got a little complicated.
>>>
>>> Why is that exactly? That's pretty much how node and property updates
>>> work anyways though maybe not strictly enforced. The other updates are
>>> atomic flags and ref counts which I assume are fine. The spinlock
>>> protects traversing the tree of nodes and list of properties.
>>> Traversing and updates would need to follow similar semantics to
>>> rculist, right? BTW, there's been some thoughts to move node and
>>> property lists to list_head. We may want to do that rather than trying
>>> to roll our own RCU code.
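(As a concrete sketch of the rculist direction, assuming struct device_node
grew a list_head of properties and struct property a list member, neither of
which exists today, a reader-side lookup might look like:

	/* Hypothetical: caller holds rcu_read_lock() instead of devtree_lock.
	 * The real structures still use a hand-rolled singly linked ->next
	 * chain, so the member names below are made up for illustration. */
	static struct property *__of_find_property_rcu(struct device_node *np,
						       const char *name)
	{
		struct property *pp;

		list_for_each_entry_rcu(pp, &np->properties, list) {
			if (of_prop_cmp(pp->name, name) == 0)
				return pp;
		}
		return NULL;
	}

with writers still serialized by the existing lock/mutex but publishing via
list_add_tail_rcu()/list_del_rcu() and freeing with kfree_rcu(), which would
also mean adding an rcu_head to struct property.)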
>>>
>>>> Frank wasn't a big fan of it back then and he still wasn't a few weeks
>>>> back.
>>>
>>> I guess you spoke at ELCE. RCU seems like the right locking to me as
>>> we're almost entirely reads and I haven't seen another proposal.
>>>
>>>> If you two agree to prefer RCU over this patch then I would look more
>>>> into adding RCU into the lookup path. The argument was that this isn't
>>>> time critical. I'm just trying to avoid replacing the locking for
>>>> nothing.
>>>> So, should I come up with a RCU patch?
>>>
>>> Lets see what Frank says first.
>>
>> RCU is good for adding list entries, deleting list entries, and updating
>> the value(s) of a list element.
>>
>> RCU is not good for excluding readers of a data structure while multiple
>> entries are being added, deleted, and/or modified, such that the readers
>> only see the state of the data structure either (1) before any changes
>> or (2) after all changes occur.  Overlay application and removal require
>> multiple modifications, with readers seeing either the before state or
>> the after state.
> 
> Do we ensure that currently? The of_mutex provides what you describe,

No, we don't.  I've been saying that our locking is horribly broken and
that is one of the items that needs to be fixed for overlays.  Overlays
weren't contemplated in the design of the devicetree locking architecture.

So I don't want to change over to RCU, which would carry the same issues forward.
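To make that concrete: the single-update case that RCU handles well is roughly
the sketch below (hypothetical helper with made-up names; today
__of_update_property() does the equivalent with plain pointer updates under
devtree_lock):

	/* Replace one property so that concurrent readers see either the old
	 * or the new one, never a torn mix.  Caller holds of_mutex. */
	static void of_replace_prop_rcu(struct property __rcu **slot,
					struct property *newprop)
	{
		struct property *oldprop;

		oldprop = rcu_dereference_protected(*slot,
						    lockdep_is_held(&of_mutex));
		rcu_assign_pointer(*slot, newprop);
		synchronize_rcu();	/* wait out pre-existing readers */
		kfree(oldprop);
	}

Each such swap is atomic to readers.  An overlay is a long sequence of them
across many nodes, and nothing in RCU stops a reader from running between
swap N and swap N+1 and seeing a half-applied overlay.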

> but most or all the reader paths don't take the mutex. Even the
> changeset API takes the spinlock a single node or property at a time.
> 
> Rob
> 
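(For reference, the per-entry locking Rob mentions has roughly this shape,
simplified from drivers/of/dynamic.c with the error/revert handling dropped:

	static int apply_changeset_entries(struct of_changeset *ocs)
	{
		struct of_changeset_entry *ce;
		int ret;

		list_for_each_entry(ce, &ocs->entries, node) {
			/* each entry takes and drops devtree_lock on its own,
			 * so a reader can observe the tree between any two
			 * entries of a changeset */
			ret = __of_changeset_entry_apply(ce);
			if (ret)
				return ret;
		}
		return 0;
	}

so even a changeset-based update is not atomic with respect to readers that
only take the spinlock.)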



