Re: [PATCH 6/6] loop: don't add worker into idle list

On Thu, Jul 08, 2021 at 10:16:50AM -0400, Dan Schatzberg wrote:
> On Thu, Jul 08, 2021 at 02:58:36PM +0800, Ming Lei wrote:
> > On Wed, Jul 07, 2021 at 09:55:34AM -0400, Dan Schatzberg wrote:
> > > On Wed, Jul 07, 2021 at 11:19:14AM +0800, Ming Lei wrote:
> > > > On Tue, Jul 06, 2021 at 09:55:36AM -0400, Dan Schatzberg wrote:
> > > > > On Mon, Jul 05, 2021 at 06:26:07PM +0800, Ming Lei wrote:
> > > > > >  	}
> > > > > > +
> > > > > > +	spin_lock(lock);
> > > > > >  	list_add_tail(&cmd->list_entry, cmd_list);
> > > > > > +	spin_unlock(lock);
> > > > > >  	queue_work(lo->workqueue, work);
> > > > > > -	spin_unlock(&lo->lo_work_lock);
> > > > > >  }
> > > > > >  
> > > > > >  static void loop_update_rotational(struct loop_device *lo)
> > > > > > @@ -1131,20 +1159,18 @@ static void loop_set_timer(struct loop_device *lo)
> > > > > >  
> > > > > >  static void __loop_free_idle_workers(struct loop_device *lo, bool force)
> > > > > >  {
> > > > > > -	struct loop_worker *pos, *worker;
> > > > > > +	struct loop_worker *worker;
> > > > > > +	unsigned long id;
> > > > > >  
> > > > > >  	spin_lock(&lo->lo_work_lock);
> > > > > > -	list_for_each_entry_safe(worker, pos, &lo->idle_worker_list,
> > > > > > -				idle_list) {
> > > > > > +	xa_for_each(&lo->workers, id, worker) {
> > > > > >  		if (!force && time_is_after_jiffies(worker->last_ran_at +
> > > > > >  						LOOP_IDLE_WORKER_TIMEOUT))
> > > > > >  			break;
> > > > > > -		list_del(&worker->idle_list);
> > > > > > -		xa_erase(&lo->workers, worker->blkcg_css->id);
> > > > > > -		css_put(worker->blkcg_css);
> > > > > > -		kfree(worker);
> > > > > > +		if (refcount_dec_and_test(&worker->refcnt))
> > > > > > +			loop_release_worker(worker);
> > > > > 
> > > > > This one is puzzling to me. Can't you hit this refcount decrement
> > > > > superfluously each time the loop timer fires?
> > > > 
> > > > Not sure I get your point.
> > > > 
> > > > As I mentioned above, this one is the counterpart of the INIT reference,
> > > > but a new lo_cmd may have just grabbed it when queueing an rq before the
> > > > worker is erased from the xarray, so we can't release the worker here
> > > > until that command is completed.
> > > 
> > > Suppose at this point there's still an outstanding loop_cmd to be
> > > serviced for this worker. The refcount_dec_and_test should decrement
> > > the refcount and then fail the conditional, not calling
> > > loop_release_worker. What happens if __loop_free_idle_workers fires
> > > again before the loop_cmd is processed? Won't you decrement the
> > > refcount again, and then end up calling loop_release_worker before the
> > > loop_cmd is processed?
> >  
> > Good catch!
> > 
> > The following one line change should avoid the issue:
> > 
> > diff --git a/drivers/block/loop.c b/drivers/block/loop.c
> > index 146eaa03629b..3cd51bddfec9 100644
> > --- a/drivers/block/loop.c
> > +++ b/drivers/block/loop.c
> > @@ -980,7 +980,6 @@ static struct loop_worker *loop_alloc_or_get_worker(struct loop_device *lo,
> >  
> >  static void loop_release_worker(struct loop_worker *worker)
> >  {
> > -	xa_erase(&worker->lo->workers, worker->blkcg_css->id);
> >  	css_put(worker->blkcg_css);
> >  	kfree(worker);
> >  }
> > @@ -1167,6 +1166,7 @@ static void __loop_free_idle_workers(struct loop_device *lo, bool force)
> >  		if (!force && time_is_after_jiffies(worker->last_ran_at +
> >  						LOOP_IDLE_WORKER_TIMEOUT))
> >  			break;
> > +		xa_erase(&worker->lo->workers, worker->blkcg_css->id);
> >  		if (refcount_dec_and_test(&worker->refcnt))
> >  			loop_release_worker(worker);
> >  	}
> 
> Yeah, I think this resolves the issue. You could end up repeatedly
> allocating workers for the same blkcg in the event that you're keeping
> the worker busy for the entire LOOP_IDLE_WORKER_TIMEOUT (since it only
> updates last_ran_at when it goes idle). You may want to add a racy
> check of whether the refcount is > 1 to avoid that.

Given that the event is very unlikely to trigger, I think we can live
with that.
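
If it ever turns out to matter, something like the following (untested)
racy check on top of the previous delta could skip busy workers instead
of tearing them down; refcount_read() is good enough here since a false
negative only means the worker gets re-allocated on the next I/O:

	xa_for_each(&lo->workers, id, worker) {
		if (!force && time_is_after_jiffies(worker->last_ran_at +
						LOOP_IDLE_WORKER_TIMEOUT))
			break;
		/*
		 * Racy check: extra references mean commands are still in
		 * flight for this worker, so leave it in the xarray rather
		 * than dropping the INIT reference and forcing a fresh
		 * allocation for the same blkcg.
		 */
		if (refcount_read(&worker->refcnt) > 1)
			continue;
		xa_erase(&worker->lo->workers, worker->blkcg_css->id);
		if (refcount_dec_and_test(&worker->refcnt))
			loop_release_worker(worker);
	}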

> 
> I think there might be a separate issue with the locking here though -
> you acquire the lo->lo_work_lock in __loop_free_idle_workers and then
> check worker->last_ran_at for each worker. However, you only protect
> the write to worker->last_ran_at (in loop_process_work) with the
> worker->lock, which I think means there's a potential data race on
> worker->last_ran_at.

It should be fine since both the WRITE and the READ on worker->last_ran_at
are atomic. Even if the race is triggered, we can still live with that.
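
That said, if we want to make the lockless access explicit (and keep
KCSAN quiet), the plain load/store could simply be annotated, roughly
like this (untested):

	/* writer, in loop_process_work(), under worker->lock */
	WRITE_ONCE(worker->last_ran_at, jiffies);

	/* reader, in __loop_free_idle_workers(), under lo->lo_work_lock only */
	if (!force && time_is_after_jiffies(READ_ONCE(worker->last_ran_at) +
					LOOP_IDLE_WORKER_TIMEOUT))
		break;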


On Thu, Jul 8, 2021 at 10:41 PM Dan Schatzberg <schatzberg.dan@xxxxxxxxx> wrote:
>
> On Thu, Jul 08, 2021 at 02:58:36PM +0800, Ming Lei wrote:
...
> Another thought - do you need to change the kfree here to kfree_rcu?
> I'm concerned about the scenario where loop_queue_work's xa_load finds
> the worker and __loop_free_idle_workers subsequently erases it and calls
> loop_release_worker. If the worker is freed, then the subsequent
> refcount_inc_not_zero in loop_queue_work would be a use-after-free.

Good catch, will fix it in the next version.
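
The fix will probably look roughly like the following (untested sketch;
it assumes a struct rcu_head is added to struct loop_worker, and the
blkcg_css used as the lookup key on the loop_queue_work() side is just a
placeholder name):

	static void loop_release_worker(struct loop_worker *worker)
	{
		css_put(worker->blkcg_css);
		/* worker may still be found via xa_load(), free after a GP */
		kfree_rcu(worker, rcu);
	}

	/* lookup side in loop_queue_work() */
	rcu_read_lock();
	worker = xa_load(&lo->workers, blkcg_css->id);
	/* only take a reference if the worker hasn't been released yet */
	if (worker && !refcount_inc_not_zero(&worker->refcnt))
		worker = NULL;
	rcu_read_unlock();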


Thanks,
Ming



