Re: [PATCH 5/5] block: revert back to synchronous request_queue removal

On Tue, Apr 14, 2020 at 08:47:25AM -0700, Christoph Hellwig wrote:
> On Tue, Apr 14, 2020 at 04:19:02AM +0000, Luis Chamberlain wrote:
> > Commit dc9edc44de6c ("block: Fix a blk_exit_rl() regression") merged in
> > v4.12 moved the work behind blk_release_queue() into a workqueue after a
> > splat surfaced which indicated that some work in blk_release_queue()
> > could sleep in blk_exit_rl(). This splat would be possible when a driver
> > called blk_put_queue() or blk_cleanup_queue() (which calls blk_put_queue()
> > as its final call) from an atomic context.
> > 
> > blk_put_queue() decrements the refcount for the request_queue
> > kobject, and upon reaching 0 blk_release_queue() is called. Although
> > blk_exit_rl() is now removed through commit db6d9952356 ("block: remove
> > request_list code"), we reserve the right to be able to sleep within
> > blk_release_queue() context. If you see no other way and *have* to be
> > in atomic context when your driver calls the last blk_put_queue(),
> > you can always just increase your block device's reference count with
> > bdgrab(), as this can be done in atomic context; the request_queue
> > removal would then be left to the upper layers later. We now document
> > this bit of tribal knowledge as well, and adjust the kdoc format a bit.
> > 
> > We revert back to synchronous request_queue removal because asynchronous
> > removal creates a regression in the userspace interaction expected by
> > several drivers. An example is removing the loopback driver: an ioctl
> > is issued from userspace to do so, and upon successful return one
> > expects the device to have been removed. Moving to asynchronous
> > request_queue removal could have broken many scripts which relied on
> > the removal having completed if there was no error.
> > 
> > Using asynchronous request_queue removal has, however, helped us find
> > other bugs; in the future we can test what could break with this
> > arrangement by enabling CONFIG_DEBUG_KOBJECT_RELEASE.
> > 
> > Cc: Bart Van Assche <bvanassche@xxxxxxx>
> > Cc: Omar Sandoval <osandov@xxxxxx>
> > Cc: Hannes Reinecke <hare@xxxxxxxx>
> > Cc: Nicolai Stange <nstange@xxxxxxx>
> > Cc: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
> > Cc: Michal Hocko <mhocko@xxxxxxxxxx>
> > Cc: yu kuai <yukuai3@xxxxxxxxxx>
> > Suggested-by: Nicolai Stange <nstange@xxxxxxx>
> > Fixes: dc9edc44de6c ("block: Fix a blk_exit_rl() regression")
> > Signed-off-by: Luis Chamberlain <mcgrof@xxxxxxxxxx>
> > ---
> >  block/blk-core.c       | 19 ++++++++++++++++++-
> >  block/blk-sysfs.c      | 38 +++++++++++++++++---------------------
> >  include/linux/blkdev.h |  2 --
> >  3 files changed, 35 insertions(+), 24 deletions(-)
> > 
> > diff --git a/block/blk-core.c b/block/blk-core.c
> > index 5aaae7a1b338..8346c7c59ee6 100644
> > --- a/block/blk-core.c
> > +++ b/block/blk-core.c
> > @@ -301,6 +301,17 @@ void blk_clear_pm_only(struct request_queue *q)
> >  }
> >  EXPORT_SYMBOL_GPL(blk_clear_pm_only);
> >  
> > +/**
> > + * blk_put_queue - decrement the request_queue refcount
> > + *
> > + * Decrements the refcount to the request_queue kobject, when this reaches
> > + * 0 we'll have blk_release_queue() called. You should avoid calling
> > + * this function in atomic context but if you really have to ensure you
> > + * first refcount the block device with bdgrab() / bdput() so that the
> > + * last decrement happens in blk_cleanup_queue().
> > + *
> > + * @q: the request_queue structure to decrement the refcount for
> > + */
> >  void blk_put_queue(struct request_queue *q)
> >  {
> >  	kobject_put(&q->kobj);
> > @@ -328,10 +339,16 @@ EXPORT_SYMBOL_GPL(blk_set_queue_dying);
> >  
> >  /**
> >   * blk_cleanup_queue - shutdown a request queue
> > - * @q: request queue to shutdown
> >   *
> >   * Mark @q DYING, drain all pending requests, mark @q DEAD, destroy and
> >   * put it.  All future requests will be failed immediately with -ENODEV.
> > + *
> > + * You should not call this function in atomic context. If you need to
> > + * refcount a request_queue in atomic context, instead refcount the
> > + * block device with bdgrab() / bdput().
> 
> I think this needs a WARN_ON thrown in to enforce the calling context.

I considered adding a might_sleep(), but upon review Bart noted that this
function already takes a mutex_lock(), and if you look under the hood of
mutex_lock() it has a might_sleep() at the very top. The warning is
therefore implicit.
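
To spell that out, here is roughly what the mutex_lock() fast path looks
like (a simplified sketch, not verbatim kernel source):

	void __sched mutex_lock(struct mutex *lock)
	{
		/*
		 * With CONFIG_DEBUG_ATOMIC_SLEEP this splats with
		 * "BUG: sleeping function called from invalid context"
		 * if we are in atomic context.
		 */
		might_sleep();

		if (!__mutex_trylock_fast(lock))
			__mutex_lock_slowpath(lock);
	}

So any atomic caller that reaches the mutex_lock() in blk_cleanup_queue()
already trips the same check an explicit might_sleep() would give us.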

> > + *
> > + * @q: request queue to shutdown
> 
> Moving the argument documentation seems against the usual kerneldoc
> style.

Would you look at that, Documentation/doc-guide/kernel-doc.rst does say
to keep the argument at the top, as it was before. OK, I'll revert that.
Sorry, I used include/net/mac80211.h as my basis for style.
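
So it will end up back in the conventional layout, something like:

	/**
	 * blk_cleanup_queue - shutdown a request queue
	 * @q: request queue to shutdown
	 *
	 * Mark @q DYING, drain all pending requests, mark @q DEAD, destroy and
	 * put it.  All future requests will be failed immediately with -ENODEV.
	 *
	 * You should not call this function in atomic context. If you need to
	 * refcount a request_queue in atomic context, instead refcount the
	 * block device with bdgrab() / bdput().
	 */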

> Otherwise this looks good, I hope it sticks :)

I hope the kdocs and the implicit might_sleep() make it stick now. But
hey, this uncovered wonderfully obscure bugs; it was fun. I'll also add a
selftest later to ensure we don't regress on some of this again.
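
For completeness, the bdgrab() / bdput() escape hatch the commit message
refers to would look roughly like this in a driver (the my_driver_*()
names and the stashed pointer are made up for illustration):

	#include <linux/blkdev.h>
	#include <linux/fs.h>

	static struct block_device *my_bdev_ref;	/* made-up example state */

	static void my_driver_atomic_teardown_prep(struct block_device *bdev)
	{
		/*
		 * Called from atomic context: don't let the last
		 * blk_put_queue() happen here.  bdgrab() only bumps the
		 * bdev inode refcount, so it is safe without sleeping.
		 */
		my_bdev_ref = bdgrab(bdev);
	}

	static void my_driver_teardown(void)
	{
		/*
		 * Later, from process context: drop the pinned reference.
		 * The request_queue then goes away through the normal
		 * upper-layer teardown, where sleeping is fine.
		 */
		bdput(my_bdev_ref);
	}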

  Luis
