Hi Stephan,

On Wed, Feb 19, 2025 at 11:40:16AM +0100, Stephan Gerhold wrote:
> Hi Andi,
>
> On Wed, Feb 19, 2025 at 12:02:06AM +0100, Andi Shyti wrote:
> >
> > sorry for the very late reply here. Just one question.
>
> Thanks for bringing the patch back up after such a long time. I've been
> meaning to resend it, but never found the time to do so... :-)

We have a long list of forgotten patches that belong to the far
past. I'm trying to revive them.

> > > downstream/vendor driver [1]. Due to lack of documentation about the
> > > interconnect setup/behavior I cannot say exactly if this is right.
> > > Unfortunately, this is not implemented very consistently downstream...
> >
> > Can we have someone from Qualcomm or Linaro taking a peek here?
>
> I suppose I count as someone from Linaro nowadays. However, since this
> driver is only used on really old platforms nowadays, I'm not sure where
> to look or who to ask...
>
> At the end, the whole bus scaling/interconnect is always somewhat
> "imprecise". There is no clear "correct" or "wrong", since the ideal
> bandwidth depends heavily on the actual use case that we are not aware
> of in the driver. There is also overhead when voting for bandwidth,
> since that can take a couple of milliseconds.
>
> The most important part is that we vote for any bandwidth at all, since
> otherwise the bus path could potentially be completely off and it would
> get stuck. My patch implements one of the approaches that was used in
> the downstream/vendor drivers and matches what we already have upstream
> in the corresponding spi-qup driver. I think it's "good enough". If
> someone ever wants to fine-tune this based on actual measurements they
> can just submit an incremental patch. Right now this series is blocking
> adding the necessary properties in the device tree and that's not good.
>
> Surprisingly this series still applies cleanly on top of linux-next. The
> dt-bindings have review tags and there was plenty of time for someone
> else to chime in for the driver. So maybe you can just pick them up? :D

Yes, I already tested them.

> > > [1]: https://git.codelinaro.org/clo/la/kernel/msm-3.10/-/commit/67174e2624ea64814231e7e1e4af83fd882302c6
> >
> > ...
> >
> > > @@ -1745,6 +1775,11 @@ static int qup_i2c_probe(struct platform_device *pdev)
> > >  		goto fail_dma;
> > >  	}
> > >  	qup->is_dma = true;
> > > +
> > > +	qup->icc_path = devm_of_icc_get(&pdev->dev, NULL);
> > > +	if (IS_ERR(qup->icc_path))
> > > +		return dev_err_probe(&pdev->dev, PTR_ERR(qup->icc_path),
> > > +				     "failed to get interconnect path\n");
> >
> > Can we live without it if it fails?
>
> of_icc_get() returns NULL if the interconnect API is disabled, or if
> "interconnects" is not defined in the device tree, so this is already
> handled. If "interconnects" is enabled and defined, I think we shouldn't
> ignore errors. Therefore, this should work as intended.

Yes, because qup_i2c_vote_bw() checks inside for NULL values.

My idea was that:

	if (IS_ERR(...)) {
		dev_warn(...);
		qup->icc_path = NULL;
	}

and let things work.

Anyway, if you want to keep it this way, that's fine with me. I don't
have a strong opinion, just a preference to keep things moving.

Thanks,
Andi

> Let me know if I should resend the patch or if you can apply it
> directly.
>
> Thanks,
> Stephan