Re: [PATCH v2 0/3] QCM2290 LMH

On Tue, Mar 26, 2024 at 07:29:17AM +0100, Krzysztof Kozlowski wrote:
> On 26/03/2024 00:01, Nícolas F. R. A. Prado wrote:
> > On Mon, Mar 25, 2024 at 08:59:55PM +0100, Krzysztof Kozlowski wrote:
> >> On 20/03/2024 20:08, Nícolas F. R. A. Prado wrote:
> >>>> Loic Poulain (1):
> >>>>       arm64: dts: qcom: qcm2290: Add LMH node
> >>>>
> >>>>  Documentation/devicetree/bindings/thermal/qcom-lmh.yaml | 12 ++++++++----
> >>>>  arch/arm64/boot/dts/qcom/qcm2290.dtsi                   | 14 +++++++++++++-
> >>>>  drivers/thermal/qcom/lmh.c                              |  3 +++
> >>>>  3 files changed, 24 insertions(+), 5 deletions(-)
> >>>
> >>> Hi,
> >>>
> >>> I've started tracking the results of 'make dtbs_check' on linux-next, and I've
> >>> noticed that on today's next, next-20240320, there's a new warning coming from
> >>> this. The reason is that the DT change has landed, but the binding has not,
> >>> since it goes through a separate tree. I thought the binding was supposed to
> >>> always land before the driver and DT that make use of it, but looking through
> >>
> >> There is no such rule. Of course, a new binding should be documented in
> >> an earlier or the same kernel release cycle as its users, but it's not a
> >> requirement.
> > 
> > So, after giving the documentation a second look, I found this:
> > 
> > "For new platforms, or additions to existing ones, make dtbs_check should not
> > add any new warnings."
> > 
> > Source: https://www.kernel.org/doc/html/latest/process/maintainer-soc.html#validating-devicetree-files
> 
> It's just "should"...
> 
> > 
> > What is not clear there is what the reference point is: is it on linux-next?
> > Mainline release?
> 
> Does it matter? There was never a new warning introduced by this
> patchset. The patchset itself is correct. No new warnings.
> 
> > 
> > As Konrad pointed out it's tricky (and maybe not worth it) to guarantee this for
> > linux-next. But for mainline release it seems feasible (and IMO the target, as
> > after that stability guarantees should apply).
> 
> I don't believe in such guarantees. Different maintainers apply patches
> differently, especially bindings, so this is beyond our control. It's
> often also beyond the SoC maintainer's control.
> 
> > 
> >>
> >>
> >>> the dt-binding documentation pages I couldn't find anything confirming or
> >>> denying that.
> >>>
> >>> I expect this to happen again in the future, which is why I'm reaching out to
> >>> understand better how to deal with this kind of situation.
> >>
> >> Deal with it how? Are you asking in terms of maintaining some subsystem
> >> or sending patches? In this particular case, I don't think there is
> >> anything on your side to deal with.
> > 
> > I'm asking what the most helpful way is for me to report these failures
> > to you, the maintainers, in the future.
> 
> The most effective way is LKP-like or Rob's-bot-like automated replies to
> the original email threads, testing the original patchset on linux-next.
> But Rob's bot is actually already doing that, just on a different base.
> 
> Other reports, such as for cases where only part of a patchset is
> applied, could also be useful, but I am afraid you would generate way too
> many of them. Bindings are supposed to go via the subsystem tree and DTS
> via the SoC tree, so basically 90% of patchsets might see some sort of
> delay, resulting in dtbs_check false-positive warnings.
> 
> For my SoC I check my trees, mainline and next, and keep adding to a list
> of exceptions for expected issues. What's useful for Qualcomm? Konrad,

Is that list of exceptions in-tree? If there are known false positives (issues
that can't be "properly" fixed), they should be public knowledge. And if we all
collaborate on such a list, we can remove the noise from dtbs_check's output so
it only contains real regressions and a backlog of issues that can be fixed.
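
To make the idea concrete, here is a minimal sketch of how CI could consume
such a list: keep each known false positive as one warning line in a shared
plain-text file and subtract that file from the current dtbs_check output, so
only new lines get reported. The file name (known-warnings.txt) and the grep
filter are made up for illustration, not an existing in-tree convention:

  # Collect the current warnings; dtbs_check prints them with in-tree DTB
  # paths, so a rough filter on "arch/" isolates them from other make output:
  make ARCH=arm64 defconfig
  make ARCH=arm64 dtbs_check 2>&1 | grep '^arch/' | sort -u > current.txt
  # Lines in current.txt that are absent from the shared list are regressions
  # (comm -13 prints lines unique to the second file; needs bash for <(...)):
  comm -13 <(sort -u known-warnings.txt) current.txt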

> Bjorn, any thoughts?
> 
> Keep in mind that expected warnings can persist for an entire cycle when
> dealing with technical debt, because DTS goes in N+1.
> 
> > 
> > Rob has already automated running dtbs_check for patches coming into the mailing
> > list. And I have set up KernelCI to run dtbs_check on linux-next in order to
> > catch any issues that might slip through, or happen during integration of the
> > trees, etc.
> > 
> > Now, if we agree that dtbs_check regressions on linux-next are acceptable, at
> > least ones like this, where the issue is just synchronization between
> 
> Yes and no. True regressions are not acceptable. Expected intermediate
> regressions from a patchset that is being applied but is not yet fully
> applied are OK. Expected regressions for intra-cycle work are also OK.

Got it. So I'll keep KernelCI running dtbs_check and tracking it, but I won't
report failures caused by partially applied series.
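
For what it's worth, when one of these reports does point at a real problem,
the warning can be reproduced locally without a full dtbs_check run by
validating only the affected DTB, optionally narrowed to the affected schema.
For example, for this series (combining the CHECK_DTBS example from the
maintainer-soc document linked above with DT_SCHEMA_FILES from
Documentation/devicetree/bindings/writing-schema.rst):

  make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- defconfig
  make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- CHECK_DTBS=y \
       DT_SCHEMA_FILES=thermal/qcom-lmh.yaml qcom/qcm2290.dtb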

> 
> > maintainers, then I can simply not report them in the future. But we should
> > have some point at which dtbs_check should not regress, and the mainline
> > release seems like the reasonable choice, because otherwise dtbs_check
> > warnings would just keep growing forever.
> 
> I therefore invite you to my session:
> https://eoss24.sched.com/event/1aBEf?iframe=no
> We'll see if they keep growing :)

I won't be able to attend EOSS, but will catch the recording later ;)

Thanks,
Nícolas



