On 10/11/24 8:49 AM, Liu Ying wrote:
On 10/11/2024, Marek Vasut wrote:
On 10/10/24 9:15 AM, Liu Ying wrote:
On 10/09/2024, Marek Vasut wrote:
The LDB serializer clock operates at either x7 or x14 the rate of the input pixel clock.
Isn't it either x7 or 3.5x?
Is it 3.5x for dual-link LVDS?
Yes. In dual-link mode each channel carries every other pixel, so the serializer runs at 7 x (pixel clock / 2), i.e. 3.5x.
/* Returns the LVDS serializer link frequency in Hz; 'clock' is the pixel clock in kHz. */
static unsigned long fsl_ldb_link_frequency(struct fsl_ldb *fsl_ldb, int clock)
{
	/* Dual-link: each channel serializes half the pixels, hence x3.5. */
	if (fsl_ldb_is_dual(fsl_ldb))
		return clock * 3500;
	else
		return clock * 7000;
}
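As a worked example (numbers mine, not from the thread): a 74250 kHz pixel clock yields 74250 * 7000 = 519.75 MHz on a single link, while dual-link yields 74250 * 3500 = 259.875 MHz per channel, i.e. 3.5x the pixel clock.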
I don't have such a panel right now to test.
You can add a panel DT node for testing, to see the relationship
between the clocks without a real dual-link LVDS panel.
I'll take your word for this.
[...]
diff --git a/drivers/gpu/drm/bridge/fsl-ldb.c b/drivers/gpu/drm/bridge/fsl-ldb.c
index 0e4bac7dd04ff..a3a31467fcc85 100644
--- a/drivers/gpu/drm/bridge/fsl-ldb.c
+++ b/drivers/gpu/drm/bridge/fsl-ldb.c
@@ -278,6 +278,16 @@ fsl_ldb_mode_valid(struct drm_bridge *bridge,
 	return MODE_OK;
 }
+
+static void fsl_ldb_mode_set(struct drm_bridge *bridge,
+			     const struct drm_display_mode *mode,
+			     const struct drm_display_mode *adj)
+{
+	struct fsl_ldb *fsl_ldb = to_fsl_ldb(bridge);
+	unsigned long requested_link_freq = fsl_ldb_link_frequency(fsl_ldb, mode->clock);
+
+	clk_set_rate(fsl_ldb->clk, requested_link_freq);
The mode_set callback won't be called when only crtc_state->active
changes from false to true in an atomic commit, e.g., when blanking
the emulated fbdev and then unblanking it. So, in that case, the
clk_set_rate() call in fsl_ldb_atomic_enable() still runs after the
ones from mxsfb_kms or lcdif_kms.
Also, it doesn't look neat to call clk_set_rate() from both the
mode_set and the atomic_enable callbacks.
I agree the mode_set callback is not the best place for this.
Do you know of a better callback in which to do this? I couldn't find one.
A wild idea is to change the order between the CRTC atomic_enable
callback and the bridge one by implementing your own
atomic_commit_tail... Other than that, I don't think there is any
place to do this besides the atomic_enable callback.
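For illustration, a minimal sketch of what such a custom commit_tail could look like, assuming the stock DRM atomic helpers. The fsl_kms_* names are hypothetical, this would live in the CRTC driver (mxsfb/lcdif) rather than the bridge, and actually reordering the CRTC enable relative to the bridge enable would mean open-coding drm_atomic_helper_commit_modeset_enables() instead of calling the stock helper:

#include <drm/drm_atomic_helper.h>
#include <drm/drm_modeset_helper_vtables.h>

/* Mirrors the default drm_atomic_helper_commit_tail() ordering. */
static void fsl_kms_atomic_commit_tail(struct drm_atomic_state *state)
{
	struct drm_device *dev = state->dev;

	drm_atomic_helper_commit_modeset_disables(dev, state);
	drm_atomic_helper_commit_planes(dev, state, 0);
	/*
	 * A custom CRTC-vs-bridge enable ordering would replace this
	 * helper call with open-coded enable steps.
	 */
	drm_atomic_helper_commit_modeset_enables(dev, state);
	drm_atomic_helper_fake_vblank(state);
	drm_atomic_helper_commit_hw_done(state);
	drm_atomic_helper_wait_for_vblanks(dev, state);
	drm_atomic_helper_cleanup_planes(dev, state);
}

static const struct drm_mode_config_helper_funcs fsl_kms_mode_config_helpers = {
	.atomic_commit_tail = fsl_kms_atomic_commit_tail,
};

/* Hooked up at init: drm->mode_config.helper_private = &fsl_kms_mode_config_helpers; */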
I will give that a try, thanks.
Anyway, I don't think it is necessary to coordinate the clk_set_rate()
calls between this driver and mxsfb_kms or lcdif_kms, because the
"video_pll1" clock rate is supposed to be assigned in DT...
I disagree with this part. I believe the clock rate assignment in DT is
only a temporary workaround which should be removed. The drivers should
be able to figure out and set the clock tree configuration themselves.
The idea is to assign a reasonable PLL clock rate in DT to make the
display drivers' life easier, especially on i.MX8MP, where the LDB
and the Samsung MIPI DSI may use a single PLL at the same time.
I would really like to avoid setting an arbitrary clock rate in DT whenever it can be avoided, and it surely can be avoided for this simple use case.
... just like I said in patch 1/2, the "video_pll1" clock rate needs
to be x2 the "media_ldb" clock rate in dual LVDS link mode. Without
an assigned "video_pll1" clock rate in DT, this driver cannot
achieve that.
This is something the LDB driver can infer from DT and configure the
clock tree accordingly.
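A hedged sketch of that idea, using only standard clk API calls. fsl_ldb_set_link_clock() is a hypothetical helper, and clk_get_parent() returns only the immediate parent, which on i.MX8MP may be a mux or divider rather than "video_pll1" itself, so a real implementation would have to walk up the clock tree:

#include <linux/clk.h>
#include <linux/err.h>

/* Hypothetical helper: derive the PLL rate from the LDB link rate. */
static void fsl_ldb_set_link_clock(struct fsl_ldb *fsl_ldb, int pixclk_khz)
{
	unsigned long link = fsl_ldb_link_frequency(fsl_ldb, pixclk_khz);
	struct clk *parent = clk_get_parent(fsl_ldb->clk);

	/* Dual LVDS link mode needs the PLL at x2 the "media_ldb" rate. */
	if (fsl_ldb_is_dual(fsl_ldb) && !IS_ERR_OR_NULL(parent))
		clk_set_rate(parent, 2 * link);

	clk_set_rate(fsl_ldb->clk, link);
}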
And the i.MX8MP LDB + Samsung MIPI DSI case is not simple, considering
that both use one single PLL and the display modes are read from EDID.
You could use separate PLLs for each LCDIF scanout engine in such a
deployment; I already ran into that, so I am aware of it. That is
probably the best way out of such a problem, especially if accurate
pixel clocks are a requirement.
[...]