Hi Rob,

On 1/18/19 21:16, Rob Clark wrote:
> On Fri, Jan 18, 2019 at 1:06 PM Doug Anderson <dianders@xxxxxxxxxxxx> wrote:
>>
>> Hi,
>>
>> On Thu, Dec 20, 2018 at 9:30 AM Jordan Crouse <jcrouse@xxxxxxxxxxxxxx> wrote:
>>>
>>> Try to get the interconnect path for the GPU and vote for the maximum
>>> bandwidth to support all frequencies. This is needed for performance.
>>> Later we will want to scale the bandwidth based on the frequency to
>>> also optimize for power, but that will require some device tree
>>> infrastructure that does not yet exist.
>>>
>>> v5: Remove hardcoded interconnect name and just use the default
>>
>> nit: ${SUBJECT} says v3, but this is v5.
>>
>> I'll put in my usual plug for considering "patman" to help post
>> patches. Even though it lives in the u-boot git repo, it's still a gem
>> for kernel work.
>> <http://git.denx.de/?p=u-boot.git;a=blob;f=tools/patman/README>
>>
>>> @@ -85,6 +89,12 @@ static void __a6xx_gmu_set_freq(struct a6xx_gmu *gmu, int index)
>>>                 dev_err(gmu->dev, "GMU set GPU frequency error: %d\n", ret);
>>>
>>>         gmu->freq = gmu->gpu_freqs[index];
>>> +
>>> +       /*
>>> +        * Eventually we will want to scale the path vote with the frequency but
>>> +        * for now leave it at max so that the performance is nominal.
>>> +        */
>>> +       icc_set(gpu->icc_path, 0, MBps_to_icc(7216));
>>
>> You'll need to change icc_set() here to icc_set_bw() to match v13, AKA:
>>
>> - https://patchwork.kernel.org/patch/10766335/
>> - https://lkml.kernel.org/r/20190116161103.6937-2-georgi.djakov@xxxxxxxxxx
>>
>>> @@ -695,6 +707,9 @@ int a6xx_gmu_resume(struct a6xx_gpu *a6xx_gpu)
>>>         if (ret)
>>>                 goto out;
>>>
>>> +       /* Set the bus quota to a reasonable value for boot */
>>> +       icc_set(gpu->icc_path, 0, MBps_to_icc(3072));
>>
>> This will also need to change to icc_set_bw().
>>
>>> @@ -781,6 +798,9 @@ int a6xx_gmu_stop(struct a6xx_gpu *a6xx_gpu)
>>>         /* Tell RPMh to power off the GPU */
>>>         a6xx_rpmh_stop(gmu);
>>>
>>> +       /* Remove the bus vote */
>>> +       icc_set(gpu->icc_path, 0, 0);
>>
>> This will also need to change to icc_set_bw().
>>
>> I have the same questions for this series that I had in response to
>> the email ("[v5 2/3] drm/msm/dpu: Integrate interconnect API in MDSS")
>> <https://lkml.kernel.org/r/CAD=FV=XUeMTGH+CDwGs3PfK4igdQrCbwucw7_2ViBc4i7grvxg@xxxxxxxxxxxxxx>
>>
>> Copy / pasting here (with minor name changes) so folks don't have to
>> follow links / search email.
>>
>> ==
>>
>> I'm curious what the plan is for landing this series. Rob / Georgi:
>> do you have any preference? Options I'd imagine:
>>
>> A) Wait until interconnect lands (in 5.1?) and land this through
>> msm-next in the version after (5.2?)
>>
>> B) Georgi provides an immutable branch for interconnect when his
>> series lands (assuming he's landing via pull request) and that gets
>> pulled into the relevant drm tree.
>>
>> C) Rob acks this series and indicates that it should go in through
>> Georgi's tree (probably only works if Georgi plans to send a pull
>> request). If we're going this route then (IIUC) we'd want to land
>> this in Georgi's tree sooner rather than later so it can get some
>> bake time? NOTE: as per my prior reply, I believe Rob has already
>> acked this patch.
>
> I'm ok to ack and have it land via Georgi's tree, if Georgi wants to
> do that. Or otherwise, I could maybe coordinate w/ airlied to send a
> 2nd late msm-next pr including the gpu and display interconnect
> patches.

I'm fine either way. But it would be nice if both patches (this one and
the dt-bindings) go together.
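For reference, the conversion Doug points out is just a rename, so
against the v13 API the three call sites would look roughly like this.
This is only a sketch on my side: the probe placement, the dev pointer
and the error handling are illustrative, not lifted from the actual
patch.

  #include <linux/interconnect.h>

  /* At probe: get the default (unnamed) interconnect path of the GPU
   * node. icc_set_bw() treats a NULL path as a no-op, so the votes
   * below stay safe even when the DT describes no path.
   */
  gpu->icc_path = of_icc_get(dev, NULL);
  if (IS_ERR(gpu->icc_path))
          gpu->icc_path = NULL;

  /* __a6xx_gmu_set_freq(): vote max until per-frequency scaling exists */
  icc_set_bw(gpu->icc_path, 0, MBps_to_icc(7216));

  /* a6xx_gmu_resume(): reasonable bus quota for boot */
  icc_set_bw(gpu->icc_path, 0, MBps_to_icc(3072));

  /* a6xx_gmu_stop(): remove the bus vote */
  icc_set_bw(gpu->icc_path, 0, 0);

  /* And on teardown, release the path */
  icc_put(gpu->icc_path);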
The v6 of this patch applies cleanly to my tree, but the next one (2/3)
with the dt-bindings doesn't.

Thanks,
Georgi