On 10/10/24 9:15 AM, Liu Ying wrote:
On 10/09/2024, Marek Vasut wrote:
The LDB serializer clock operates at either x7 or x14 the rate of the input
Isn't it either x7 or 3.5x?
Is it 3.5x for dual-link LVDS?
I don't have such a panel right now to test.
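For reference, the 7x vs. 3.5x factor being discussed is what the
fsl_ldb_link_frequency() helper used in the patch below computes. A
minimal sketch of that logic, assuming a fsl_ldb_is_dual() dual-link
check (not shown in the quoted hunk): mode->clock is in kHz, and
dual-link LVDS splits the pixel data across two channels, so each
serializer runs at 7x half the pixel clock, i.e. 3.5x.

static unsigned long fsl_ldb_link_frequency(struct fsl_ldb *fsl_ldb, int clock)
{
	/* Dual-link: each channel carries half the pixels, so 7x / 2 = 3.5x. */
	if (fsl_ldb_is_dual(fsl_ldb))
		return clock * 3500;	/* clock in kHz, result in Hz */

	/* Single-link: the serializer runs at 7x the pixel clock. */
	return clock * 7000;		/* clock in kHz, result in Hz */
}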
[...]
diff --git a/drivers/gpu/drm/bridge/fsl-ldb.c b/drivers/gpu/drm/bridge/fsl-ldb.c
index 0e4bac7dd04ff..a3a31467fcc85 100644
--- a/drivers/gpu/drm/bridge/fsl-ldb.c
+++ b/drivers/gpu/drm/bridge/fsl-ldb.c
@@ -278,6 +278,16 @@ fsl_ldb_mode_valid(struct drm_bridge *bridge,
 	return MODE_OK;
 }
 
+static void fsl_ldb_mode_set(struct drm_bridge *bridge,
+			     const struct drm_display_mode *mode,
+			     const struct drm_display_mode *adj)
+{
+	struct fsl_ldb *fsl_ldb = to_fsl_ldb(bridge);
+	unsigned long requested_link_freq = fsl_ldb_link_frequency(fsl_ldb, mode->clock);
+
+	clk_set_rate(fsl_ldb->clk, requested_link_freq);
+}
The mode_set callback won't be called when only crtc_state->active
changes from false to true in an atomic commit, e.g., when the
emulated fbdev is blanked and then unblanked. So, in this case, the
clk_set_rate() in fsl_ldb_atomic_enable() is still called after
those from mxsfb_kms or lcdif_kms.
Also, it doesn't look neat to call clk_set_rate() from both the
mode_set callback and the atomic_enable callback.
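For context, a minimal sketch of an enable path that derives the rate
from the atomic state itself, so it also covers the active-only
(unblank) transition that skips mode_set. The lookup helpers and names
follow the existing driver, but treat this as an illustration, not the
exact upstream code:

static void fsl_ldb_atomic_enable(struct drm_bridge *bridge,
				  struct drm_bridge_state *old_bridge_state)
{
	struct fsl_ldb *fsl_ldb = to_fsl_ldb(bridge);
	struct drm_atomic_state *state = old_bridge_state->base.state;
	const struct drm_crtc_state *crtc_state;
	const struct drm_display_mode *mode;
	struct drm_connector *connector;
	struct drm_crtc *crtc;

	/* Fetch the mode from the atomic state, not from a cached copy. */
	connector = drm_atomic_get_new_connector_for_encoder(state,
							     bridge->encoder);
	crtc = drm_atomic_get_new_connector_state(state, connector)->crtc;
	crtc_state = drm_atomic_get_new_crtc_state(state, crtc);
	mode = &crtc_state->adjusted_mode;

	/*
	 * atomic_enable runs on every enable, including active-only
	 * transitions, so setting the rate here needs no mode_set copy.
	 */
	clk_set_rate(fsl_ldb->clk,
		     fsl_ldb_link_frequency(fsl_ldb, mode->clock));
	clk_prepare_enable(fsl_ldb->clk);
}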
I agree the mode_set callback is not the best place for this.
Do you know of a better callback in which to do this? I couldn't find one.
The idea is to assign a reasonable PLL clock rate in DT to make
display drivers' lives easier, especially on i.MX8MP, where the LDB
and the Samsung MIPI DSI may use a single PLL at the same time.
I would really like to avoid setting an arbitrary clock rate in DT,
especially if it can be avoided. And it surely can be avoided for
this simple use case.
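One way this simple use case can avoid a fixed clock rate assignment
in DT is to let the driver both validate and program the link
frequency itself. A hypothetical sketch (not part of the posted
patch), using clk_round_rate() in mode_valid to reject modes the
clock tree cannot provide:

static enum drm_mode_status
fsl_ldb_mode_valid(struct drm_bridge *bridge,
		   const struct drm_display_info *info,
		   const struct drm_display_mode *mode)
{
	struct fsl_ldb *fsl_ldb = to_fsl_ldb(bridge);
	unsigned long link_freq = fsl_ldb_link_frequency(fsl_ldb, mode->clock);

	/* Reject modes whose link frequency the clock tree cannot hit. */
	if (clk_round_rate(fsl_ldb->clk, link_freq) != link_freq)
		return MODE_CLOCK_RANGE;

	return MODE_OK;
}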