On Thu, 14 Oct 2021 at 19:02, Hector Martin <marcan@xxxxxxxxx> wrote:
>
> On 14/10/2021 21.55, Ulf Hansson wrote:
> > On Thu, 14 Oct 2021 at 13:43, Hector Martin <marcan@xxxxxxxxx> wrote:
> >> I was poking around and noticed the OPP core can already integrate with
> >> interconnect requirements, so perhaps the memory controller can be an
> >> interconnect provider, and the CPU nodes can directly reference it as a
> >> consumer? This seems like a more accurate model of what the hardware
> >> does, and I think I saw some devices doing this already.
> >
> > Yeah, that could work too. And, yes, I agree, it may be a better
> > description of the HW.
> >
> >>
> >> (only problem is I have no idea of the actual bandwidth numbers involved
> >> here... I'll have to run some benchmarks to make sure this isn't just
> >> completely dummy data)
> >>
>
> So... I tried getting bandwidth numbers and failed. It seems these
> registers don't actually affect peak performance in any measurable way.
> I'm also getting almost the same GeekBench scores on macOS with and
> without this mechanism enabled, although there is one subtest that seems
> to show a measurable difference.
>
> My current guess is this is something more subtle (latencies? idle
> timers and such?) than a performance state. If that is the case, do you
> have any ideas as to the best way to model it in Linux? Should we even
> bother if it mostly has a minimal performance gain for typical workloads?

For latency constraints, we have dev_pm_qos. Adding such a constraint
will make the genpd governor prevent deeper idle states for the device
and its corresponding PM domain (genpd). (A rough, untested sketch of
what I mean is at the bottom of this mail.)

But that doesn't sound like a good fit here. If you are right, it rather
sounds like there is some kind of quiescence mode of the memory
controller that can be prevented. But I have no clue, of course. :-)

>
> I'll try to do some latency tests, see if I can make sense of what it's
> actually doing.
>

Kind regards
Uffe
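
For reference, here is the rough, untested sketch of the dev_pm_qos idea
mentioned above. The device pointer, the function names and the 500 us
value are just placeholders; the point is only that a consumer driver
can express a resume latency constraint, which the genpd governor takes
into account when selecting idle states for the device's PM domain.

#include <linux/device.h>
#include <linux/pm_qos.h>

static struct dev_pm_qos_request mc_latency_req;

/* Disallow idle states with a resume latency above 500 us. */
static int add_latency_constraint(struct device *dev)
{
	return dev_pm_qos_add_request(dev, &mc_latency_req,
				      DEV_PM_QOS_RESUME_LATENCY, 500);
}

/* Drop the constraint again, allowing deeper idle states. */
static void drop_latency_constraint(void)
{
	dev_pm_qos_remove_request(&mc_latency_req);
}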