05.09.2020 13:34, Mikko Perttunen wrote:
> With job recovery becoming optional, syncpoints may have a mismatch
> between their value and max value when freed. As such, when freeing,
> set the max value to the current value of the syncpoint so that it
> is in a sane state for the next user.
>
> Signed-off-by: Mikko Perttunen <mperttunen@xxxxxxxxxx>
> ---
>  drivers/gpu/host1x/syncpt.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/drivers/gpu/host1x/syncpt.c b/drivers/gpu/host1x/syncpt.c
> index 2fad8b2a55cc..82ecb4ac387e 100644
> --- a/drivers/gpu/host1x/syncpt.c
> +++ b/drivers/gpu/host1x/syncpt.c
> @@ -385,6 +385,7 @@ static void syncpt_release(struct kref *ref)
>  {
>  	struct host1x_syncpt *sp = container_of(ref, struct host1x_syncpt, ref);
>
> +	atomic_set(&sp->max_val, host1x_syncpt_read_min(sp));
>  	sp->locked = false;
>
>  	mutex_lock(&sp->host->syncpt_mutex);

Please note that the sync point state actually needs to be completely
reset at sync point request time, because both the downstream fastboot
and upstream U-Boot [1] needlessly enable the display VBLANK interrupt,
which continuously increments sync point #26 during kernel boot until
the display controller is reset.

[1] https://github.com/u-boot/u-boot/blob/master/drivers/video/tegra.c#L155

Hence, once sync point #26 is requested, it will have a dirty state. So
far this doesn't have any visible effect because sync points aren't
used much.