On Tue, 29 Nov 2022 at 15:00, Hector Martin <marcan@xxxxxxxxx> wrote:
>
> On 29/11/2022 20.36, Ulf Hansson wrote:
> > On Mon, 28 Nov 2022 at 15:29, Hector Martin <marcan@xxxxxxxxx> wrote:
> >> +examples:
> >> +  - |
> >> +    // This example shows a single CPU per domain and 2 domains,
> >> +    // with two p-states per domain.
> >> +    // Shipping hardware has 2-4 CPUs per domain and 2-6 domains.
> >> +    cpus {
> >> +      #address-cells = <2>;
> >> +      #size-cells = <0>;
> >> +
> >> +      cpu@0 {
> >> +        compatible = "apple,icestorm";
> >> +        device_type = "cpu";
> >> +        reg = <0x0 0x0>;
> >> +        operating-points-v2 = <&ecluster_opp>;
> >
> > To me, it looks like the operating-points-v2 phandle better belongs in
> > the performance-domain provider node. I mean, aren't the OPPs really a
> > description of the performance-domain provider?
> >
> > That said, I suggest we try to extend the generic performance-domain
> > binding [1] with an "operating-points-v2". That way, we should instead
> > be able to reference it from this binding.
> >
> > In fact, that would be very similar to what already exists for the
> > generic power-domain binding [2]. I think it would be rather nice to
> > follow a similar pattern for the performance-domain binding.
>
> While I agree with the technical rationale and the proposed approach
> being better in principle...
>
> We're at v5 of bikeshedding this trivial driver's DT binding, and the
> comment could've been made at v3. To quote IRC just now:

It could have been, and I certainly apologize for that. It has simply
been a busy period for me, so I haven't been able to look closer at the
DT bindings until now.

>
> > this way the machines will be obsolete before things are fully upstreamed
>
> I think it's long overdue for the kernel community to take a deep look
> at itself and its development and review process, because it is quite
> honestly insane how pathologically inefficient it is compared to,
> basically, every other large and healthy open source project of similar
> or even greater impact and scope.
>
> Cc Linus, because this is for your Mac and I assume you care. We're at
> v5 here for this silly driver. Meanwhile, rmk recently threw in the
> towel on upstreaming macsmc for us. We're trying, and I'll keep trying
> because I actually get paid (by very generous donors) to do this, but
> if I weren't I'd have given up a long time ago. And while I won't give
> up, I can't deny this situation affects my morale and willingness to
> keep pushing on upstreaming on a regular basis.
>
> Meanwhile, OpenBSD has been *shipping* full M1 support for a while now
> in official release images (and since Linux is the source of truth for
> DT bindings, every time we re-bikeshed it we break their users because
> they, quite reasonably, aren't interested in waiting for us Linux
> slowpokes to figure it out first).
>
> Please, let's introspect about this for a moment. Something is deeply
> broken if people with 25+ years as an arch maintainer can't get a
> 700-line mfd driver upstreamed before giving up. I don't know how we
> expect to ever get a Rust GPU driver merged if it takes 6+ versions to
> upstream the world's easiest cpufreq hardware.
>
> - Hector

I didn't intend to bikeshed this, and I do understand your valid
concerns from the statements above. Instead, my intent was to help by
reviewing, simply because I care about this too.

If you think incorporating the changes I proposed is too big a deal at
this point, let me not stand in the way of applying this.
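Just for reference, in case it helps for a later revision, below is a
rough and untested sketch of what I had in mind. The provider node
name, compatible string and register address are placeholders only;
the point is that the OPP table would be referenced from the
performance-domain provider node rather than from each CPU node:

    cpus {
            #address-cells = <2>;
            #size-cells = <0>;

            cpu@0 {
                    compatible = "apple,icestorm";
                    device_type = "cpu";
                    reg = <0x0 0x0>;
                    /* Consumer side only points at its performance domain. */
                    performance-domains = <&cpufreq_e>;
            };
    };

    /* Placeholder provider node; name, compatible and reg are illustrative. */
    cpufreq_e: performance-controller@210e20000 {
            compatible = "apple,t8103-cluster-cpufreq";
            reg = <0x2 0x10e20000 0x0 0x1000>;
            #performance-domain-cells = <0>;
            /* The OPP table is described at the provider, as suggested above. */
            operating-points-v2 = <&ecluster_opp>;
    };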
In the end, it's the DT maintainers' decision.

Kind regards
Uffe