Hi Tyler,

Thank you for the review comments. My replies are inlined below.

> > Scale it by introducing per-device lock: lo_mutex that proctests
> > field in struct loop_device. Keep loop_ctl_mutex to protect global
>
> s/proctests field/protects the fields/

OK

> > @@ -1890,22 +1890,23 @@ static int lo_open(struct block_device *bdev, fmode_t mode)
> >  		return err;
> >  	lo = bdev->bd_disk->private_data;
> >  	if (!lo) {
> > -		err = -ENXIO;
> > -		goto out;
> > +		mutex_unlock(&loop_ctl_mutex);
> > +		return -ENXIO;
> >  	}
> > -
> > -	atomic_inc(&lo->lo_refcnt);
> > -out:
> > +	err = mutex_lock_killable(&lo->lo_mutex);
> >  	mutex_unlock(&loop_ctl_mutex);
>
> I don't see a possibility for deadlock but it bothers me a little that
> we're not unlocking in the reverse locking order here, as we do in
> loop_control_ioctl(). There should be no perf impact if we move the
> mutex_unlock(&loop_ctl_mutex) after mutex_unlock(&lo->lo_mutex).

lo_open() was one of the top functions that showed up in contention
profiling, and the only shared data it updates is lo_refcnt, which can
be protected by lo_mutex. We must hold loop_ctl_mutex in order to get a
valid lo pointer, otherwise we could race with
loop_control_ioctl(LOOP_CTL_REMOVE). Unlocking in a different order is
not an issue, as long as we always preserve the locking order (a sketch
of the resulting lo_open() is at the end of this email).

> > @@ -2157,6 +2158,7 @@ static int loop_add(struct loop_device **l, int i)
> >  	disk->flags |= GENHD_FL_NO_PART_SCAN;
> >  	disk->flags |= GENHD_FL_EXT_DEVT;
> >  	atomic_set(&lo->lo_refcnt, 0);
> > +	mutex_init(&lo->lo_mutex);
>
> We need a corresponding call to mutex_destroy() in loop_remove().

Yes, thank you for catching this.

> > +++ b/drivers/block/loop.h
> > @@ -62,6 +62,7 @@ struct loop_device {
> >  	struct request_queue	*lo_queue;
> >  	struct blk_mq_tag_set	tag_set;
> >  	struct gendisk		*lo_disk;
>
> There's an instance, which is not in this patch's context, of accessing
> lo_disk that needs lo_mutex protection. In loop_probe(), we call
> get_disk_and_module(lo->lo_disk) and we need to lock and unlock lo_mutex
> around that call.

I will add it (a rough sketch of both follow-up changes is also at the
end of this email).

Thank you,
Pasha
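
For reference, here is roughly how lo_open() ends up looking with this
patch applied (condensed; the tail where lo_refcnt is bumped under
lo_mutex is paraphrased rather than copied from the hunk above):

static int lo_open(struct block_device *bdev, fmode_t mode)
{
	struct loop_device *lo;
	int err;

	/* Lock order: loop_ctl_mutex is always acquired before lo_mutex. */
	err = mutex_lock_killable(&loop_ctl_mutex);
	if (err)
		return err;

	/* loop_ctl_mutex keeps the lo pointer valid against LOOP_CTL_REMOVE. */
	lo = bdev->bd_disk->private_data;
	if (!lo) {
		mutex_unlock(&loop_ctl_mutex);
		return -ENXIO;
	}

	err = mutex_lock_killable(&lo->lo_mutex);
	/*
	 * Dropping loop_ctl_mutex before lo_mutex is not the reverse
	 * unlock order, but it cannot deadlock: deadlock requires an
	 * inverted acquisition order, and loop_ctl_mutex is never taken
	 * while lo_mutex is held.
	 */
	mutex_unlock(&loop_ctl_mutex);
	if (err)
		return err;

	atomic_inc(&lo->lo_refcnt);
	mutex_unlock(&lo->lo_mutex);
	return 0;
}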
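
And a rough sketch of the two follow-up changes I plan for the next
version (untested; the surrounding lines in loop_remove() are shown from
memory for context, only the mutex_destroy() line is new):

static void loop_remove(struct loop_device *lo)
{
	del_gendisk(lo->lo_disk);
	blk_cleanup_queue(lo->lo_queue);
	blk_mq_free_tag_set(&lo->tag_set);
	put_disk(lo->lo_disk);
	/* Pairs with mutex_init(&lo->lo_mutex) in loop_add(). */
	mutex_destroy(&lo->lo_mutex);
	kfree(lo);
}

In loop_probe(), once the lookup has produced a valid lo, take lo_mutex
around the lo_disk access (kobj being the local that loop_probe()
already returns):

	mutex_lock(&lo->lo_mutex);
	kobj = get_disk_and_module(lo->lo_disk);
	mutex_unlock(&lo->lo_mutex);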