Re: [PATCH v2 13/15] phy: qcom-qmp-pcie: add support for pipediv2 clock

On 20/10/2022 12:05, Johan Hovold wrote:
On Thu, Oct 20, 2022 at 11:31:35AM +0300, Dmitry Baryshkov wrote:
On 19/10/2022 14:35, Johan Hovold wrote:
Some QMP PHYs have a second fixed-divider pipe clock that needs to be
enabled along with the pipe clock.

Add support for an optional "pipediv2" clock.

Signed-off-by: Johan Hovold <johan+linaro@xxxxxxxxxx>
---
   drivers/phy/qualcomm/phy-qcom-qmp-pcie.c | 42 ++++++++++++++++++++----
   1 file changed, 36 insertions(+), 6 deletions(-)
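
For illustration, a minimal sketch of how an optional second pipe clock
can be looked up with devm_clk_get_optional(), which returns NULL rather
than an error when the clock is not described in the devicetree. The
pipediv2_clk field, the function name and the exact lookup details below
are assumptions for the sketch, not taken from the patch itself:

static int qmp_pcie_pipe_clk_get_sketch(struct qmp_pcie *qmp,
					struct device_node *np)
{
	struct device *dev = qmp->dev;

	/* Mandatory pipe clock, looked up from the child node as before. */
	qmp->pipe_clk = devm_get_clk_from_child(dev, np, NULL);
	if (IS_ERR(qmp->pipe_clk))
		return dev_err_probe(dev, PTR_ERR(qmp->pipe_clk),
				     "failed to get pipe clock\n");

	/* Optional fixed-divider pipe clock; NULL when not present. */
	qmp->pipediv2_clk = devm_clk_get_optional(dev, "pipediv2");
	if (IS_ERR(qmp->pipediv2_clk))
		return dev_err_probe(dev, PTR_ERR(qmp->pipediv2_clk),
				     "failed to get pipediv2 clock\n");

	return 0;
}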

I still think that the attached patch is somewhat simpler. Diffstat
supports that idea:

$ diffstat /tmp/pipe.diff
   phy-qcom-qmp-pcie.c |   26 ++++++++++++++++----------
   1 file changed, 16 insertions(+), 10 deletions(-)

It's not just about LoC.
Yes, I'm saying this after having cleaned up several open-coded
versions of clk_bulk_foo from the drm/msm code. It typically starts with
the 'just another clock' story, and then suddenly they are all over the
code.

But you don't start using the bulk API when you have one clock. Two,
maybe. Three, sure. It's a decision that needs to be made on a
case-by-case basis, and the pipe clocks for the QMP block are different
from general interface clocks (which are more likely to be extended).
(And of course the clocks would need to be independent in the first
place.)

Here's your example diff inline:

diff --git a/drivers/phy/qualcomm/phy-qcom-qmp-pcie.c b/drivers/phy/qualcomm/phy-qcom-qmp-pcie.c
index 9c8e009033f1..a148b143dd90 100644
--- a/drivers/phy/qualcomm/phy-qcom-qmp-pcie.c
+++ b/drivers/phy/qualcomm/phy-qcom-qmp-pcie.c
@@ -1378,8 +1378,10 @@ struct qmp_pcie {
  	void __iomem *tx2;
  	void __iomem *rx2;
-	struct clk *pipe_clk;
+	struct clk_bulk_data *pipe_clks;
+	int num_pipe_clks;
  	struct clk_bulk_data *clks;
+
  	struct reset_control_bulk_data *resets;
  	struct regulator_bulk_data *vregs;
@@ -1923,11 +1925,9 @@ static int qmp_pcie_power_on(struct phy *phy)
  	qmp_pcie_init_registers(qmp, &cfg->tables);
  	qmp_pcie_init_registers(qmp, mode_tables);
-	ret = clk_prepare_enable(qmp->pipe_clk);
-	if (ret) {
-		dev_err(qmp->dev, "pipe_clk enable failed err=%d\n", ret);
+	ret = clk_bulk_prepare_enable(qmp->num_pipe_clks, qmp->pipe_clks);
+	if (ret)
  		return ret;
-	}
  	/* Pull PHY out of reset state */
  	qphy_clrbits(pcs, cfg->regs[QPHY_SW_RESET], SW_RESET);
@@ -1950,7 +1950,7 @@ static int qmp_pcie_power_on(struct phy *phy)
  	return 0;
err_disable_pipe_clk:
-	clk_disable_unprepare(qmp->pipe_clk);
+	clk_bulk_disable_unprepare(qmp->num_pipe_clks, qmp->pipe_clks);
  	return ret;
  }
@@ -1960,7 +1960,7 @@ static int qmp_pcie_power_off(struct phy *phy)
  	struct qmp_pcie *qmp = phy_get_drvdata(phy);
  	const struct qmp_phy_cfg *cfg = qmp->cfg;
-	clk_disable_unprepare(qmp->pipe_clk);
+	clk_bulk_disable_unprepare(qmp->num_pipe_clks, qmp->pipe_clks);
  	/* PHY reset */
  	qphy_setbits(qmp->pcs, cfg->regs[QPHY_SW_RESET], SW_RESET);

The above is pretty much identical, except for using the bulk API
instead of the very straightforward pipe-clock helpers.
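
For comparison, a rough sketch of what such dedicated helpers could look
like with two pipe clocks; this is illustrative only, not the code from
the patch under review, and it relies on the clk API treating a NULL
clock (the absent, optional pipediv2 clock) as a no-op:

static int qmp_pcie_pipe_clk_enable_sketch(struct qmp_pcie *qmp)
{
	int ret;

	ret = clk_prepare_enable(qmp->pipe_clk);
	if (ret) {
		dev_err(qmp->dev, "pipe_clk enable failed: %d\n", ret);
		return ret;
	}

	/* clk_prepare_enable(NULL) simply returns 0. */
	ret = clk_prepare_enable(qmp->pipediv2_clk);
	if (ret) {
		dev_err(qmp->dev, "pipediv2_clk enable failed: %d\n", ret);
		clk_disable_unprepare(qmp->pipe_clk);
	}

	return ret;
}

static void qmp_pcie_pipe_clk_disable_sketch(struct qmp_pcie *qmp)
{
	/* Disable in reverse order; NULL clocks are ignored. */
	clk_disable_unprepare(qmp->pipediv2_clk);
	clk_disable_unprepare(qmp->pipe_clk);
}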

@@ -2154,6 +2154,7 @@ static int qmp_pcie_parse_dt_legacy(struct qmp_pcie *qmp, struct device_node *np
  	struct platform_device *pdev = to_platform_device(qmp->dev);
  	const struct qmp_phy_cfg *cfg = qmp->cfg;
  	struct device *dev = qmp->dev;
+	struct clk *clk;
  	qmp->serdes = devm_platform_ioremap_resource(pdev, 0);
  	if (IS_ERR(qmp->serdes))
@@ -2206,12 +2207,17 @@ static int qmp_pcie_parse_dt_legacy(struct qmp_pcie *qmp, struct device_node *np
  		}
  	}
-	qmp->pipe_clk = devm_get_clk_from_child(dev, np, NULL);
-	if (IS_ERR(qmp->pipe_clk)) {
-		return dev_err_probe(dev, PTR_ERR(qmp->pipe_clk),
+	clk = devm_get_clk_from_child(dev, np, NULL);
+	if (IS_ERR(clk)) {
+		return dev_err_probe(dev, PTR_ERR(clk),
  				     "failed to get pipe clock\n");
  	}
+	qmp->num_pipe_clks = 1;
+	qmp->pipe_clks = devm_kcalloc(dev, qmp->num_pipe_clks,
+				      sizeof(*qmp->pipe_clks), GFP_KERNEL);
+	qmp->pipe_clks[0].clk = clk;

So here you're poking at bulk API internals and forgot to set the id
string, which the implementation uses.

I didn't forget, I just skipped setting it. Hmm. I thought it was used only for clk_bulk_get(). But after checking, it seems it's also used for error messages. Mea culpa.

But it's not that I was poking into the internals; these fields are in the public header.
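
For completeness, a sketch of the manual fill with the id set as well,
since the clk core prints clk_bulk_data.id in its error messages; the
allocation-failure check is also added here (illustrative only, same
names as in the diff above):

	qmp->num_pipe_clks = 1;
	qmp->pipe_clks = devm_kcalloc(dev, qmp->num_pipe_clks,
				      sizeof(*qmp->pipe_clks), GFP_KERNEL);
	if (!qmp->pipe_clks)
		return -ENOMEM;

	qmp->pipe_clks[0].id = "pipe";	/* used in clk core error messages */
	qmp->pipe_clks[0].clk = clk;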


For reasons like this, and the fact that we will likely never have a
third pipe clock, I'm reluctant to use the bulk API.

Let's defer to the maintainer's opinion, then.


+
  	return 0;
  }
Johan

--
With best wishes
Dmitry



