On Fri, Jul 09, 2021 at 08:49:55AM +0800, Ming Lei wrote:
> On Thu, Jul 08, 2021 at 11:15:13AM -0400, Dan Schatzberg wrote:
> > On Thu, Jul 08, 2021 at 11:01:54PM +0800, Ming Lei wrote:
> > > On Thu, Jul 08, 2021 at 10:16:50AM -0400, Dan Schatzberg wrote:
> > > > On Thu, Jul 08, 2021 at 02:58:36PM +0800, Ming Lei wrote:
> > > > > On Wed, Jul 07, 2021 at 09:55:34AM -0400, Dan Schatzberg wrote:
> > > > > > On Wed, Jul 07, 2021 at 11:19:14AM +0800, Ming Lei wrote:
> > > > > > > On Tue, Jul 06, 2021 at 09:55:36AM -0400, Dan Schatzberg wrote:
> > > > > > > > On Mon, Jul 05, 2021 at 06:26:07PM +0800, Ming Lei wrote:
> > > > > > > > >  	}
> > > > > > > > > +
> > > > > > > > > +	spin_lock(lock);
> > > > > > > > >  	list_add_tail(&cmd->list_entry, cmd_list);
> > > > > > > > > +	spin_unlock(lock);
> > > > > > > > >  	queue_work(lo->workqueue, work);
> > > > > > > > > -	spin_unlock(&lo->lo_work_lock);
> > > > > > > > >  }
> > > > > > > > >
> > > > > > > > >  static void loop_update_rotational(struct loop_device *lo)
> > > > > > > > > @@ -1131,20 +1159,18 @@ static void loop_set_timer(struct loop_device *lo)
> > > > > > > > >
> > > > > > > > >  static void __loop_free_idle_workers(struct loop_device *lo, bool force)
> > > > > > > > >  {
> > > > > > > > > -	struct loop_worker *pos, *worker;
> > > > > > > > > +	struct loop_worker *worker;
> > > > > > > > > +	unsigned long id;
> > > > > > > > >
> > > > > > > > >  	spin_lock(&lo->lo_work_lock);
> > > > > > > > > -	list_for_each_entry_safe(worker, pos, &lo->idle_worker_list,
> > > > > > > > > -				idle_list) {
> > > > > > > > > +	xa_for_each(&lo->workers, id, worker) {
> > > > > > > > >  		if (!force && time_is_after_jiffies(worker->last_ran_at +
> > > > > > > > >  						LOOP_IDLE_WORKER_TIMEOUT))
> > > > > > > > >  			break;
> > > > > > > > > -		list_del(&worker->idle_list);
> > > > > > > > > -		xa_erase(&lo->workers, worker->blkcg_css->id);
> > > > > > > > > -		css_put(worker->blkcg_css);
> > > > > > > > > -		kfree(worker);
> > > > > > > > > +		if (refcount_dec_and_test(&worker->refcnt))
> > > > > > > > > +			loop_release_worker(worker);
> > > > > > > > This one is puzzling to me. Can't you hit this refcount decrement
> > > > > > > > superfluously each time the loop timer fires?
> > > > > > > Not sure I get your point.
> > > > > > >
> > > > > > > As I mentioned above, this one is the counterpart of the INIT reference,
> > > > > > > but a new lo_cmd may still grab a reference when queueing an rq just
> > > > > > > before the worker is erased from the xarray, so we can't release the
> > > > > > > worker here until the command is completed.
> > > > > > Suppose at this point there's still an outstanding loop_cmd to be
> > > > > > serviced for this worker. The refcount_dec_and_test will decrement
> > > > > > the refcount and then fail the conditional, so loop_release_worker
> > > > > > isn't called. What happens if __loop_free_idle_workers fires again
> > > > > > before the loop_cmd is processed? Won't you decrement the refcount
> > > > > > again, and then end up calling loop_release_worker before the
> > > > > > loop_cmd is processed?
> > > > > Good catch!
> > > > >
> > > > > The following one line change should avoid the issue:
> > > > >
> > > > > diff --git a/drivers/block/loop.c b/drivers/block/loop.c
> > > > > index 146eaa03629b..3cd51bddfec9 100644
> > > > > --- a/drivers/block/loop.c
> > > > > +++ b/drivers/block/loop.c
> > > > > @@ -980,7 +980,6 @@ static struct loop_worker *loop_alloc_or_get_worker(struct loop_device *lo,
> > > > >
> > > > >  static void loop_release_worker(struct loop_worker *worker)
> > > > >  {
> > > > > -	xa_erase(&worker->lo->workers, worker->blkcg_css->id);
> > > > >  	css_put(worker->blkcg_css);
> > > > >  	kfree(worker);
> > > > >  }
> > > > > @@ -1167,6 +1166,7 @@ static void __loop_free_idle_workers(struct loop_device *lo, bool force)
> > > > >  		if (!force && time_is_after_jiffies(worker->last_ran_at +
> > > > >  						LOOP_IDLE_WORKER_TIMEOUT))
> > > > >  			break;
> > > > > +		xa_erase(&worker->lo->workers, worker->blkcg_css->id);
> > > > >  		if (refcount_dec_and_test(&worker->refcnt))
> > > > >  			loop_release_worker(worker);
> > > > >  	}
> > > > Yeah, I think this resolves the issue. You could end up repeatedly
> > > > allocating workers for the same blkcg in the event that you're keeping
> > > > the worker busy for the entire LOOP_IDLE_WORKER_TIMEOUT (since it only
> > > > updates the last_ran_at when idle). You may want to add a racy check
> > > > if the refcount is > 1 to avoid that.
> > >
> > > Given the event is very unlikely to trigger, I think we can live
> > > with that.
> >
> > It doesn't seem unlikely to me - any workload that saturates the
> > backing device would constantly keep the loop worker with at least one
> > loop_cmd queued and trigger a free and allocate every
> > LOOP_IDLE_WORKER_TIMEOUT. Another way to solve this is to just update
> > last_ran_at before or after each loop_cmd. In any event, I'll defer to
> > your decision, it's not a critical difference.
>
> Sorry, I missed that ->last_ran_at is only set when the work isn't
> pending. In that case we can clean up/simplify the reclaim a bit by:
>
> 1) keep lo->idle_work scheduled on a 60 second period as long as any
> active worker is allocated; it is (re)scheduled when allocating or
> reclaiming a worker

Makes sense, and you should have lo_work_lock held at both points so
this is safe.

>
> 2) always set ->last_ran_at after retrieving the worker from the
> xarray, which can be done locklessly via WRITE_ONCE(), and it is cheap

Yes, or in loop_process_work, it doesn't really matter where you do it
so long as it is per-cmd. I think this change alone resolves the issue.

>
> 3) inside __loop_free_idle_workers(), reclaim a worker only if it has
> expired and has no commands in worker->cmd_list

Be careful here - the current locking doesn't allow for this because
you don't acquire the per-worker lock in __loop_free_idle_workers, so
accessing worker->cmd_list is a data race. This is why I suggested
reading the refcount instead, as it can be done without holding a lock
(see the sketch below).

> >
> > > >
> > > >
> > > > I think there might be a separate issue with the locking here though -
> > > > you acquire the lo->lo_work_lock in __loop_free_idle_workers and then
> > > > check worker->last_ran_at for each worker. However you only protect
> > > > the write to worker->last_ran_at (in loop_process_work) with the
> > > > worker->lock, which I think means there's a potential data race on
> > > > worker->last_ran_at.
> > >
> > > It should be fine since both the WRITE and the READ on worker->last_ran_at
> > > are atomic. Even if the race is triggered, we can still live with that.
> >
> > True, though in this case I think last_ran_at should be atomic_t with
> > atomic_set and atomic_read.
>
> I think READ_ONCE()/WRITE_ONCE() should be enough, and we can set/get
> last_ran_at locklessly.

Makes sense to me.
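For illustration, the write side of point 2) can be as small as the
following. This is only a sketch: loop_worker_touch() is a made-up
helper name, not something in the series; it just shows the WRITE_ONCE()
store that pairs with a READ_ONCE() in the reclaim path. It could be
called from loop_queue_work() once the worker has been looked up in the
xarray, or per command in loop_process_work(), as discussed above.

/*
 * Hypothetical helper, not part of the patch set; it only illustrates
 * the lockless ->last_ran_at update from point 2) above.
 */
static inline void loop_worker_touch(struct loop_worker *worker)
{
	/* Plain lockless store; readers pair this with READ_ONCE(). */
	WRITE_ONCE(worker->last_ran_at, jiffies);
}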
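Building on that, here is a rough sketch of how points 1)-3) plus the
refcount-based busy check might fit together in the reclaim path. The
names taken from the hunks quoted above (worker->refcnt, ->last_ran_at,
lo->workers, lo->lo_work_lock, loop_release_worker(), loop_set_timer(),
LOOP_IDLE_WORKER_TIMEOUT) are the only things assumed from the series;
the rest is illustrative and may well differ from the real patch.

static void __loop_free_idle_workers(struct loop_device *lo, bool force)
{
	struct loop_worker *worker;
	unsigned long id;

	spin_lock(&lo->lo_work_lock);
	xa_for_each(&lo->workers, id, worker) {
		/*
		 * Racy but lock-free busy check: a worker with commands
		 * still in flight holds extra references, so it simply
		 * survives this pass and is revisited next time.
		 */
		if (!force && refcount_read(&worker->refcnt) > 1)
			continue;
		/*
		 * ->last_ran_at is stamped locklessly, so read it the same
		 * way. The xarray is not ordered by idle time, hence
		 * continue rather than break.
		 */
		if (!force &&
		    time_is_after_jiffies(READ_ONCE(worker->last_ran_at) +
					  LOOP_IDLE_WORKER_TIMEOUT))
			continue;
		/* Unpublish first so no new command can grab this worker. */
		xa_erase(&lo->workers, worker->blkcg_css->id);
		/* Then drop the INIT reference taken at allocation time. */
		if (refcount_dec_and_test(&worker->refcnt))
			loop_release_worker(worker);
	}
	/* Point 1): keep the reclaim work rearmed while workers remain. */
	if (!xa_empty(&lo->workers))
		loop_set_timer(lo);
	spin_unlock(&lo->lo_work_lock);
}

Erasing the worker from the xarray before dropping the INIT reference is
what closes the window discussed earlier: once unpublished, no new
command can take a reference on a worker that is about to be freed.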