On Wed, Nov 25, 2020 at 01:37:06PM -0500, Lyude Paul wrote:
> On Tue, 2020-11-03 at 14:49 -0500, Jeremy Cline wrote:
> > Rather than protecting the nouveau_drm clients list with the lock within
> > the "client" nouveau_cli, add a dedicated lock to serialize access to
> > the list. This is both clearer and necessary to avoid lockdep being
> > upset with us when we need to iterate through all the clients in the
> > list and potentially lock their mutex, which is the same class as the
> > lock protecting the entire list.
> > 
> > Signed-off-by: Jeremy Cline <jcline@xxxxxxxxxx>
> > ---
> >  drivers/gpu/drm/nouveau/nouveau_drm.c | 9 +++++----
> >  drivers/gpu/drm/nouveau/nouveau_drv.h | 5 +++++
> >  2 files changed, 10 insertions(+), 4 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
> > index 4fe4d664c5f2..d182b877258a 100644
> > --- a/drivers/gpu/drm/nouveau/nouveau_drm.c
> > +++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
> > @@ -557,6 +557,7 @@ nouveau_drm_device_init(struct drm_device *dev)
> >  	nvkm_dbgopt(nouveau_debug, "DRM");
> > 
> >  	INIT_LIST_HEAD(&drm->clients);
> > +	mutex_init(&drm->clients_lock);
> 
> Looks like you forgot to hook up mutex_destroy() somewhere. Note there's
> actually plenty of code in nouveau right now that forgets to do this, but at
> some point we should probably fix that and avoid adding more spots where
> there's no mutex_destroy().
> 

I'm guilty of having looked at the existing locking init code in nouveau and
modeling this work accordingly. I'll send out a fix for this shortly and look
at tidying up the rest of the locks in a separate series.

Thanks!
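For reference, the counterpart would presumably go in the device teardown
path; something like the below, which is just an untested sketch (the exact
placement in nouveau_drm_device_fini() relative to the rest of the teardown
is a guess, not the final patch):

	static void
	nouveau_drm_device_fini(struct drm_device *dev)
	{
		struct nouveau_drm *drm = nouveau_drm(dev);

		/* ... existing teardown ... */

		/* Pair with the mutex_init() in nouveau_drm_device_init() */
		mutex_destroy(&drm->clients_lock);
		kfree(drm);
	}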
> >  	spin_lock_init(&drm->tile.lock);
> > 
> >  	/* workaround an odd issue on nvc1 by disabling the device's
> > @@ -1089,9 +1090,9 @@ nouveau_drm_open(struct drm_device *dev, struct drm_file *fpriv)
> > 
> >  	fpriv->driver_priv = cli;
> > 
> > -	mutex_lock(&drm->client.mutex);
> > +	mutex_lock(&drm->clients_lock);
> >  	list_add(&cli->head, &drm->clients);
> > -	mutex_unlock(&drm->client.mutex);
> > +	mutex_unlock(&drm->clients_lock);
> > 
> >  done:
> >  	if (ret && cli) {
> > @@ -1117,9 +1118,9 @@ nouveau_drm_postclose(struct drm_device *dev, struct drm_file *fpriv)
> >  	nouveau_abi16_fini(cli->abi16);
> >  	mutex_unlock(&cli->mutex);
> > 
> > -	mutex_lock(&drm->client.mutex);
> > +	mutex_lock(&drm->clients_lock);
> >  	list_del(&cli->head);
> > -	mutex_unlock(&drm->client.mutex);
> > +	mutex_unlock(&drm->clients_lock);
> > 
> >  	nouveau_cli_fini(cli);
> >  	kfree(cli);
> > diff --git a/drivers/gpu/drm/nouveau/nouveau_drv.h b/drivers/gpu/drm/nouveau/nouveau_drv.h
> > index 9d04d1b36434..550e5f335146 100644
> > --- a/drivers/gpu/drm/nouveau/nouveau_drv.h
> > +++ b/drivers/gpu/drm/nouveau/nouveau_drv.h
> > @@ -141,6 +141,11 @@ struct nouveau_drm {
> > 
> >  	struct list_head clients;
> > 
> > +	/**
> > +	 * @clients_lock: Protects access to the @clients list of &struct
> > +	 * nouveau_cli.
> > +	 */
> > +	struct mutex clients_lock;
> > +
> >  	u8 old_pm_cap;
> > 
> >  	struct {
> -- 
> Sincerely,
>    Lyude Paul (she/her)
>    Software Engineer at Red Hat
> 
> Note: I deal with a lot of emails and have a lot of bugs on my plate. If you've
> asked me a question, are waiting for a review/merge on a patch, etc. and I
> haven't responded in a while, please feel free to send me another email to check
> on my status. I don't bite!
> 
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel