On 3/28/2024 3:50 PM, Dmitry Baryshkov wrote:
On Thu, 28 Mar 2024 at 23:21, Abhinav Kumar <quic_abhinavk@xxxxxxxxxxx> wrote:
On 3/28/2024 1:58 PM, Stephen Boyd wrote:
Quoting Abhinav Kumar (2024-03-28 13:24:34)
+ Johan and Bjorn for FYI
On 3/28/2024 1:04 PM, Kuogee Hsieh wrote:
For the internal HPD case, hpd_event_thread is created to handle HPD
interrupts generated by the HPD block of the DP controller. It converts
HPD interrupts into events and executes them in the hpd_event_thread
context. For the external HPD case, HPD events are delivered by way of
dp_bridge_hpd_notify(), already in thread context. Since they are
already executed in thread context, there is no reason to hand those
events over to hpd_event_thread. Hence call dp_hpd_plug_handle() and
dp_hpd_unplug_handle() directly from dp_bridge_hpd_notify().
Signed-off-by: Kuogee Hsieh <quic_khsieh@xxxxxxxxxxx>
---
drivers/gpu/drm/msm/dp/dp_display.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
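For reference, the change is along these lines (a sketch reconstructed
from the commit message, not the exact hunk; the surrounding context in
dp_display.c may differ):

 static void dp_bridge_hpd_notify(struct drm_bridge *bridge,
                                  enum drm_connector_status status)
 {
        ...
        if (!dp_display->link_ready && status == connector_status_connected)
-               dp_add_event(dp, EV_HPD_PLUG_INT, 0, 0);
+               dp_hpd_plug_handle(dp, 0);
        else if (dp_display->link_ready && status == connector_status_disconnected)
-               dp_add_event(dp, EV_HPD_UNPLUG_INT, 0, 0);
+               dp_hpd_unplug_handle(dp, 0);
 }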
Fixes: 542b37efc20e ("drm/msm/dp: Implement hpd_notify()")
Is this a bug fix or an optimization? The commit text doesn't tell me.
I would say both.
An optimization, as it avoids the round trip through the hpd_event
thread processing.
A bug fix, because going through the hpd event thread processing
exposes, and often breaks, the already fragile HPD handling state
machine, which can be avoided in this case.
Please add a description for the particular issue that was observed
and how it is fixed by the patch.
Otherwise consider there to be an implicit NAK for all HPD-related
patches unless it is a series that moves link training to the enable
path and drops the HPD state machine completely.
I really mean it. We should stop beating a dead horse unless there is
a grave bug that must be fixed.
I think the commit message explains the issue well enough.
This is not fixing an issue we actually observed (so there is no exact
failure scenario to describe); it came out of a code walkthrough.
As Kuogee wrote, the hpd event thread is there to handle events coming
out of the hpd_isr for the internal HPD case. For the hpd_notify coming
from pmic_glink, or any other external HPD source, there is no need to
route events through the hpd event thread; that only makes things worse
by exposing the race conditions in the state machine.
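For context, the round trip being skipped looks roughly like this
(simplified from dp_display.c; locking, timestamps and the other event
cases elided):

	/* dp_add_event() only queues the event and wakes this thread,
	 * which then calls the same handlers: */
	static int hpd_event_thread(void *data)
	{
		struct dp_display_private *dp_priv = data;
		...
		switch (todo->event_id) {
		case EV_HPD_PLUG_INT:
			dp_hpd_plug_handle(dp_priv, todo->data);
			break;
		case EV_HPD_UNPLUG_INT:
			dp_hpd_unplug_handle(dp_priv, todo->data);
			break;
		...
		}
		...
	}

Since dp_bridge_hpd_notify() already runs in sleepable thread context,
calling dp_hpd_plug_handle()/dp_hpd_unplug_handle() directly simply
drops that queue-and-wake hop.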
Moving link training to the enable path and removing the hpd event
thread will be worked on, but delaying obvious fixes we can make now
does not make sense.
Looks right to me,
Reviewed-by: Abhinav Kumar <quic_abhinavk@xxxxxxxxxxx>