Re: [PATCH v3 1/2] driver core: Introduce device_link_wait_removal()

On Thu, 2024-02-29 at 14:01 +0100, Rafael J. Wysocki wrote:
> On Thu, Feb 29, 2024 at 12:13 PM Nuno Sá <noname.nuno@xxxxxxxxx> wrote:
> > 
> > Hi,
> > 
> > Just copy-pasting my previous comments :)
> > 
> > On Thu, 2024-02-29 at 11:52 +0100, Herve Codina wrote:
> > > Commit 80dd33cf72d1 ("drivers: base: Fix device link removal")
> > > introduced a workqueue to release the consumer and supplier devices used
> > > in the devlink.
> > > In the queued job, devices are released and, in turn, when all the
> > > references to these devices are dropped, the release function of the
> > > device itself is called.
> > > 
> > > Nothing is in place to synchronise with this workqueue in order to
> > > ensure that all ongoing release operations are done and so that other
> > > operations can be started safely.
> > > 
> > > For instance, in the following sequence:
> > >   1) of_platform_depopulate()
> > >   2) of_overlay_remove()
> > > 
> > > During step 1, devices are released and the related devlinks are
> > > removed (jobs pushed to the workqueue).
> > > During step 2, OF nodes are destroyed but, without any synchronisation
> > > with the devlink removal jobs, of_overlay_remove() can raise warnings
> > > related to missing of_node_put() calls:
> > >   ERROR: memory leak, expected refcount 1 instead of 2
> > > 
> > > Indeed, the missing of_node_put() call will eventually be done, but
> > > too late, from the workqueue job execution.
> > > 
> > > Introduce device_link_wait_removal() to offer a way to synchronize
> > > operations waiting for the end of devlink removals (i.e. the end of the
> > > workqueue jobs).
> > > Also, as a flushing operation is now done on the workqueue, move from a
> > > system-wide workqueue to a dedicated local one.
> > > 
> > > Fixes: 80dd33cf72d1 ("drivers: base: Fix device link removal")
> > > Cc: stable@xxxxxxxxxxxxxxx
> > > Signed-off-by: Herve Codina <herve.codina@xxxxxxxxxxx>
> > > ---
> > >  drivers/base/core.c    | 26 +++++++++++++++++++++++---
> > >  include/linux/device.h |  1 +
> > >  2 files changed, 24 insertions(+), 3 deletions(-)
> > > 
> > > diff --git a/drivers/base/core.c b/drivers/base/core.c
> > > index d5f4e4aac09b..80d9430856a8 100644
> > > --- a/drivers/base/core.c
> > > +++ b/drivers/base/core.c
> > > @@ -44,6 +44,7 @@ static bool fw_devlink_is_permissive(void);
> > >  static void __fw_devlink_link_to_consumers(struct device *dev);
> > >  static bool fw_devlink_drv_reg_done;
> > >  static bool fw_devlink_best_effort;
> > > +static struct workqueue_struct *device_link_wq;
> > > 
> > >  /**
> > >   * __fwnode_link_add - Create a link between two fwnode_handles.
> > > @@ -532,12 +533,26 @@ static void devlink_dev_release(struct device *dev)
> > >       /*
> > >        * It may take a while to complete this work because of the SRCU
> > >        * synchronization in device_link_release_fn() and if the consumer or
> > > -      * supplier devices get deleted when it runs, so put it into the "long"
> > > -      * workqueue.
> > > +      * supplier devices get deleted when it runs, so put it into the
> > > +      * dedicated workqueue.
> > >        */
> > > -     queue_work(system_long_wq, &link->rm_work);
> > > +     queue_work(device_link_wq, &link->rm_work);
> > >  }
> > > 
> > > +/**
> > > + * device_link_wait_removal - Wait for ongoing devlink removal jobs to terminate
> > > + */
> > > +void device_link_wait_removal(void)
> > > +{
> > > +     /*
> > > +      * devlink removal jobs are queued in the dedicated work queue.
> > > +      * To be sure that all removal jobs are terminated, ensure that any
> > > +      * scheduled work has run to completion.
> > > +      */
> > > +     drain_workqueue(device_link_wq);
> > > +}
> > 
> > I'm still not convinced we can have a recursive call into devlink removal,
> > so I do think flush_workqueue() is enough. I will defer to Saravana though...
> 
> AFAICS, the difference between flush_workqueue() and drain_workqueue()
> is the handling of the case when a given work item can queue itself up
> again.  This does not happen here.
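
For reference, the two calls only differ for a work item that requeues
itself. A minimal sketch of that case (illustrative only, not from the
patch; example_wq, example_fn and example_work are made-up names):

  #include <linux/workqueue.h>

  /* Illustrative: a work item that requeues itself exactly once. */
  static struct workqueue_struct *example_wq;
  static bool example_requeued;

  static void example_fn(struct work_struct *work)
  {
          if (!example_requeued) {
                  example_requeued = true;
                  queue_work(example_wq, work);   /* self-requeue */
          }
  }
  static DECLARE_WORK(example_work, example_fn);

flush_workqueue() only waits for the instances already queued when it is
called, so the self-requeued instance may still run afterwards, whereas
drain_workqueue() keeps flushing until the queue stays empty. Since
device_link_release_fn() never requeues itself, either call covers the
devlink case.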


Yeah, that's also my understanding...
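
For context, a hypothetical caller following the sequence from the commit
message (the function below and its placement are illustrative, not quoted
from this series):

  #include <linux/device.h>
  #include <linux/of.h>
  #include <linux/of_platform.h>

  /* Hypothetical teardown path based on the commit message sequence. */
  static void example_overlay_teardown(struct device *parent, int ovcs_id)
  {
          /* Step 1: releases devices and queues devlink removal jobs */
          of_platform_depopulate(parent);

          /* Wait for the queued jobs and their late of_node_put() calls */
          device_link_wait_removal();

          /* Step 2: OF nodes can now be destroyed safely */
          of_overlay_remove(&ovcs_id);
  }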
