Re: [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234

On 21/12/22 22:13, Dmitry Osipenko wrote:


On 21.12.2022 12:35, Sumit Gupta wrote:


On 20/12/22 23:40, Dmitry Osipenko wrote:


On 20.12.2022 19:02, Sumit Gupta wrote:
+static int tegra_emc_icc_get_init_bw(struct icc_node *node, u32 *avg, u32 *peak)
+{
+        *avg = 0;
+        *peak = 0;
+
+        return 0;
+}

Looks wrong, you should add ICC support to all the drivers first and
only then enable ICC. I think you added this init_bw() to work around
the fact that ICC isn't supported by T234 drivers.

If the get_bw hook is not added, then the max freq gets set because of the 'INT_MAX' default below.

  void icc_node_add(struct icc_node *node, struct icc_provider *provider)
  {
          ....
          /* get the initial bandwidth values and sync them with hardware */
          if (provider->get_bw) {
                  provider->get_bw(node, &node->init_avg, &node->init_peak);
          } else {
                  node->init_avg = INT_MAX;
                  node->init_peak = INT_MAX;
          }

So, we will have to add at least an empty function.

  static int tegra_emc_icc_get_init_bw(struct icc_node *node, u32 *avg, u32 *peak)
  {
-        *avg = 0;
-        *peak = 0;
-
         return 0;
  }

Support for all the client drivers can't be added at once, as there are
many drivers, each with different requirements and handling. This patch
series is the starting point for adding basic interconnect support in new
Tegra SoCs. Support for more clients will be added later, one by one or
in batches.

This means that bandwidth management isn't working properly. You should
leave the freq at INT_MAX and fix any integer overflows in the
code, or read out the BW from the FW.
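
For reference, if the initial bandwidth is left at INT_MAX, the place where an overflow could occur is the aggregation of requests. A minimal, untested sketch of a saturating aggregate callback (the name tegra_emc_icc_aggregate and the clamping choice are only illustrative, not something proposed in this series) could look like:

  /*
   * Clamp the accumulated average bandwidth so that summing requests on
   * top of an INT_MAX initial value cannot wrap around a u32.
   */
  static int tegra_emc_icc_aggregate(struct icc_node *node, u32 tag,
                                     u32 avg_bw, u32 peak_bw,
                                     u32 *agg_avg, u32 *agg_peak)
  {
          *agg_avg = min_t(u64, (u64)*agg_avg + avg_bw, U32_MAX);
          *agg_peak = max(*agg_peak, peak_bw);

          return 0;
  }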

Once you enable ICC for all drivers, it will start working.


I referred to the patches below and now understand what you mean.

https://patchwork.kernel.org/project/linux-arm-msm/cover/20200825170152.6434-1-georgi.djakov@xxxxxxxxxx/
 https://lore.kernel.org/all/20210912183009.6400-1-digetx@xxxxxxxxx/

There is currently no MRQ in BPMP-FW to get the bandwidth for a client,
so that can't be done in get_bw(). As explained earlier, we are in the
process of adding support for all clients, but that may take some time.
We can remove the get_bw() callback that initializes "*bw = 0" once
support has been added for all clients. I noticed that the drivers below
do the same.

  $ grep -rB3 "*peak = 0" drivers/interconnect/*
  drivers/interconnect/imx/imx.c-static int imx_icc_get_bw(struct icc_node *node, u32 *avg, u32 *peak)
  drivers/interconnect/imx/imx.c-{
  drivers/interconnect/imx/imx.c-        *avg = 0;
  drivers/interconnect/imx/imx.c:        *peak = 0;
  --
  drivers/interconnect/qcom/msm8974.c-static int msm8974_get_bw(struct icc_node *node, u32 *avg, u32 *peak)
  drivers/interconnect/qcom/msm8974.c-{
  drivers/interconnect/qcom/msm8974.c-        *avg = 0;
  drivers/interconnect/qcom/msm8974.c:        *peak = 0;


