Re: [PATCH 1/2] block: avoid to hold q->limits_lock across APIs for atomic update queue limits

On 12/19/24 11:50, Christoph Hellwig wrote:
> On Wed, Dec 18, 2024 at 06:57:45AM -0800, Damien Le Moal wrote:
>>> Yeah agreed but I see sd_revalidate_disk() is probably the only exception 
>>> which allocates the blk-mq request. Can't we fix it? 
>>
>> If we change where limits_lock is taken now, we will again introduce races
>> between user config and discovery/revalidation, which is what
>> queue_limits_start_update() and queue_limits_commit_update() intended to fix in
>> the first place.
>>
>> So changing sd_revalidate_disk() is not the right approach.
> 
> Well, sd_revalidate_disk is a bit special in that it needs a command
> on the same queue to query the information.  So it needs to be able
> to issue commands without the queue frozen.  Freezing the queue inside
> the limits lock support that, sd just can't use the convenience helpers
> that lock and freeze.
> 
>> This is overly complicated ... As I suggested, I think that a simpler approach
>> is to call blk_mq_freeze_queue() and blk_mq_unfreeze_queue() inside
>> queue_limits_commit_update(). Doing so, no driver should need to directly call
>> freeze/unfreeze. But that would be a cleanup. Let's first fix the few instances
>> that have the update/freeze order wrong. As mentioned, the pattern simply needs
> 
> Yes, the queue only needs to be frozen for the actual update,
> which would remove the need for the locking.  The big question for both
> variants is if we can get rid of all the callers that have the queue
> already frozen and then start an update.
> 
After thinking about this for a while, I found that broadly we have four categories
of users that need this pattern of limits-lock and/or queue-freeze:

1. Callers which need to acquire the limits-lock when starting the update, and to
   freeze the queue only when committing the update:
   - sd_revalidate_disk
   - nvme_init_identify
   - loop_clear_limits
   - few more...

2. Callers which need both to freeze the queue and to acquire the limits-lock when
   starting the update:
   - nvme_update_ns_info_block
   - nvme_update_ns_info_generic
   - few more... 

3. Callers which need neither to acquire the limits-lock nor to freeze the queue,
   because for these callers the limits-lock is already acquired and the queue is
   already frozen higher up in the call stack:
   - __blk_mq_update_nr_hw_queues
   - queue_xxx_store and helpers

4. Callers which only need to acquire the limits-lock; freezing the queue may not
   be needed for such callers even while committing the update:
   - scsi_add_lun
   - nvme_init_identify
   - few more...

IMO, we may convert category #4 users into category #1, as it should not hurt
even if we momentarily freeze the queue while committing the update.

Then, for each of the above categories, we may define the helpers shown below:

// For category-3:

static inline struct queue_limits
get_queue_limits(struct request_queue *q)
{
	return q->limits;
}
int set_queue_limits(struct request_queue *q,
		struct queue_limits *lim)
{
	int error;

	error = blk_validate_limits(lim);
	...
	...
	q->limits = *lim;
	if (q->disk)
		blk_apply_bdi_limits(q->disk->bdi, lim);

	return error;
}

// For category-1:

static inline struct queue_limits
__queue_limits_start_update(struct request_queue *q)
{
	mutex_lock(&q->limits_lock);
	return q->limits;
}
int __queue_limits_commit_update(struct request_queue *q,
		struct queue_limits *lim)
{
	int error;

	blk_mq_freeze_queue(q);
	error = set_queue_limits(q, lim);
	blk_mq_unfreeze_queue(q);
	mutex_unlock(&q->limits_lock);

	return error;
}

// For category-2 :
static inline struct queue_limits
queue_limits_start_update(struct request_queue *q)
{
	mutex_lock(&q->limits_lock);
	blk_mq_freeze_queue(q);
	return q->limits;
}
int queue_limits_commit_update(struct request_queue *q,
		struct queue_limits *lim)
{
	int error;

	error = set_queue_limits(q, lim);
	blk_mq_unfreeze_queue(q);
	mutex_unlock(&q->limits_lock);

	return error;
}

With the above helpers, I updated each caller based on the category it fits in.
For reference, I have attached the full diff. With this change, I ran blktests to
ensure that we don't see any lockdep splats or failures.

Thanks,
--Nilay 
diff --git a/block/blk-integrity.c b/block/blk-integrity.c
index b180cac61a9d..6d5f3664bb91 100644
--- a/block/blk-integrity.c
+++ b/block/blk-integrity.c
@@ -212,15 +212,13 @@ static ssize_t flag_store(struct device *dev, const char *page, size_t count,
 		return err;
 
 	/* note that the flags are inverted vs the values in the sysfs files */
-	lim = queue_limits_start_update(q);
+	lim = __queue_limits_start_update(q);
 	if (val)
 		lim.integrity.flags &= ~flag;
 	else
 		lim.integrity.flags |= flag;
 
-	blk_mq_freeze_queue(q);
-	err = queue_limits_commit_update(q, &lim);
-	blk_mq_unfreeze_queue(q);
+	err = __queue_limits_commit_update(q, &lim);
 	if (err)
 		return err;
 	return count;
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 6b6111513986..0d9efe543b41 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -4994,6 +4994,7 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
 	list_for_each_entry(q, &set->tag_list, tag_set_list) {
 		mutex_lock(&q->sysfs_dir_lock);
 		mutex_lock(&q->sysfs_lock);
+		mutex_lock(&q->limits_lock);
 		blk_mq_freeze_queue(q);
 	}
 	/*
@@ -5031,12 +5032,12 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
 			set->nr_hw_queues = prev_nr_hw_queues;
 			goto fallback;
 		}
-		lim = queue_limits_start_update(q);
+		lim = get_queue_limits(q);
 		if (blk_mq_can_poll(set))
 			lim.features |= BLK_FEAT_POLL;
 		else
 			lim.features &= ~BLK_FEAT_POLL;
-		if (queue_limits_commit_update(q, &lim) < 0)
+		if (set_queue_limits(q, &lim) < 0)
 			pr_warn("updating the poll flag failed\n");
 		blk_mq_map_swqueue(q);
 	}
@@ -5053,6 +5054,7 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
 
 	list_for_each_entry(q, &set->tag_list, tag_set_list) {
 		blk_mq_unfreeze_queue(q);
+		mutex_unlock(&q->limits_lock);
 		mutex_unlock(&q->sysfs_lock);
 		mutex_unlock(&q->sysfs_dir_lock);
 	}
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 8f09e33f41f6..ca50bede1fb5 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -406,43 +406,75 @@ int blk_set_default_limits(struct queue_limits *lim)
 	lim->max_user_discard_sectors = UINT_MAX;
 	return blk_validate_limits(lim);
 }
-
-/**
- * queue_limits_commit_update - commit an atomic update of queue limits
- * @q:		queue to update
- * @lim:	limits to apply
- *
- * Apply the limits in @lim that were obtained from queue_limits_start_update()
- * and updated by the caller to @q.
- *
- * Returns 0 if successful, else a negative error code.
+/*
+ * Non-atomic version of committing queue limits. For atomicity, it is the
+ * caller's responsibility to ensure that ->limits_lock has been acquired and
+ * the queue has been frozen before calling this API.  Please also see
+ * queue_limits_commit_update() and __queue_limits_commit_update().
  */
-int queue_limits_commit_update(struct request_queue *q,
+int set_queue_limits(struct request_queue *q,
 		struct queue_limits *lim)
 {
 	int error;
 
 	error = blk_validate_limits(lim);
 	if (error)
-		goto out_unlock;
+		return error;
 
 #ifdef CONFIG_BLK_INLINE_ENCRYPTION
 	if (q->crypto_profile && lim->integrity.tag_size) {
 		pr_warn("blk-integrity: Integrity and hardware inline encryption are not supported together.\n");
-		error = -EINVAL;
-		goto out_unlock;
+		return -EINVAL;
 	}
 #endif
 
 	q->limits = *lim;
 	if (q->disk)
 		blk_apply_bdi_limits(q->disk->bdi, lim);
-out_unlock:
+
+	return error;
+}
+EXPORT_SYMBOL_GPL(set_queue_limits);
+/**
+ * queue_limits_commit_update - commit an atomic update of queue limits
+ * @q:		queue to update
+ * @lim:	limits to apply
+ *
+ * Apply the limits in @lim that were obtained from queue_limits_start_update()
+ * and updated by the caller to @q.
+ *
+ * Returns 0 if successful, else a negative error code.
+ */
+int queue_limits_commit_update(struct request_queue *q,
+		struct queue_limits *lim)
+{
+	int error;
+
+	error = set_queue_limits(q, lim);
+	blk_mq_unfreeze_queue(q);
 	mutex_unlock(&q->limits_lock);
+
 	return error;
 }
 EXPORT_SYMBOL_GPL(queue_limits_commit_update);
+/*
+ * Same as queue_limits_commit_update(), but it first freezes the queue before
+ * setting the limits. It goes hand in hand with, and must be paired with,
+ * __queue_limits_start_update().
+ */
+int __queue_limits_commit_update(struct request_queue *q,
+		struct queue_limits *lim)
+{
+	int error;
+
+	blk_mq_freeze_queue(q);
+	error = set_queue_limits(q, lim);
+	blk_mq_unfreeze_queue(q);
+	mutex_unlock(&q->limits_lock);
 
+	return error;
+}
+EXPORT_SYMBOL_GPL(__queue_limits_commit_update);
 /**
  * queue_limits_set - apply queue limits to queue
  * @q:		queue to update
@@ -456,6 +488,7 @@ EXPORT_SYMBOL_GPL(queue_limits_commit_update);
  */
 int queue_limits_set(struct request_queue *q, struct queue_limits *lim)
 {
+	blk_mq_freeze_queue(q);
 	mutex_lock(&q->limits_lock);
 	return queue_limits_commit_update(q, lim);
 }
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 64f70c713d2f..28eaff3756a1 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -171,9 +171,9 @@ static ssize_t queue_max_discard_sectors_store(struct gendisk *disk,
 	if ((max_discard_bytes >> SECTOR_SHIFT) > UINT_MAX)
 		return -EINVAL;
 
-	lim = queue_limits_start_update(disk->queue);
+	lim = get_queue_limits(disk->queue);
 	lim.max_user_discard_sectors = max_discard_bytes >> SECTOR_SHIFT;
-	err = queue_limits_commit_update(disk->queue, &lim);
+	err = set_queue_limits(disk->queue, &lim);
 	if (err)
 		return err;
 	return ret;
@@ -191,9 +191,9 @@ queue_max_sectors_store(struct gendisk *disk, const char *page, size_t count)
 	if (ret < 0)
 		return ret;
 
-	lim = queue_limits_start_update(disk->queue);
+	lim = get_queue_limits(disk->queue);
 	lim.max_user_sectors = max_sectors_kb << 1;
-	err = queue_limits_commit_update(disk->queue, &lim);
+	err = set_queue_limits(disk->queue, &lim);
 	if (err)
 		return err;
 	return ret;
@@ -210,12 +210,12 @@ static ssize_t queue_feature_store(struct gendisk *disk, const char *page,
 	if (ret < 0)
 		return ret;
 
-	lim = queue_limits_start_update(disk->queue);
+	lim = get_queue_limits(disk->queue);
 	if (val)
 		lim.features |= feature;
 	else
 		lim.features &= ~feature;
-	ret = queue_limits_commit_update(disk->queue, &lim);
+	ret = set_queue_limits(disk->queue, &lim);
 	if (ret)
 		return ret;
 	return count;
@@ -277,13 +277,13 @@ static ssize_t queue_iostats_passthrough_store(struct gendisk *disk,
 	if (ret < 0)
 		return ret;
 
-	lim = queue_limits_start_update(disk->queue);
+	lim = get_queue_limits(disk->queue);
 	if (ios)
 		lim.flags |= BLK_FLAG_IOSTATS_PASSTHROUGH;
 	else
 		lim.flags &= ~BLK_FLAG_IOSTATS_PASSTHROUGH;
 
-	ret = queue_limits_commit_update(disk->queue, &lim);
+	ret = set_queue_limits(disk->queue, &lim);
 	if (ret)
 		return ret;
 
@@ -407,12 +407,12 @@ static ssize_t queue_wc_store(struct gendisk *disk, const char *page,
 		return -EINVAL;
 	}
 
-	lim = queue_limits_start_update(disk->queue);
+	lim = get_queue_limits(disk->queue);
 	if (disable)
 		lim.flags |= BLK_FLAG_WRITE_CACHE_DISABLED;
 	else
 		lim.flags &= ~BLK_FLAG_WRITE_CACHE_DISABLED;
-	err = queue_limits_commit_update(disk->queue, &lim);
+	err = set_queue_limits(disk->queue, &lim);
 	if (err)
 		return err;
 	return count;
@@ -707,10 +707,15 @@ queue_attr_store(struct kobject *kobj, struct attribute *attr,
 		entry->load_module(disk, page, length);
 
 	mutex_lock(&q->sysfs_lock);
+	mutex_lock(&q->limits_lock);
 	blk_mq_freeze_queue(q);
+
 	res = entry->store(disk, page, length);
+
 	blk_mq_unfreeze_queue(q);
+	mutex_unlock(&q->limits_lock);
 	mutex_unlock(&q->sysfs_lock);
+
 	return res;
 }
 
diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index 84da1eadff64..366704b6e2a2 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -1459,7 +1459,7 @@ static int disk_update_zone_resources(struct gendisk *disk,
 		return -ENODEV;
 	}
 
-	lim = queue_limits_start_update(q);
+	lim = __queue_limits_start_update(q);
 
 	/*
 	 * Some devices can advertize zone resource limits that are larger than
@@ -1497,9 +1497,7 @@ static int disk_update_zone_resources(struct gendisk *disk,
 	}
 
 commit:
-	blk_mq_freeze_queue(q);
-	ret = queue_limits_commit_update(q, &lim);
-	blk_mq_unfreeze_queue(q);
+	ret = __queue_limits_commit_update(q, &lim);
 
 	return ret;
 }
diff --git a/drivers/block/drbd/drbd_nl.c b/drivers/block/drbd/drbd_nl.c
index 720fc30e2ecc..7c132f748429 100644
--- a/drivers/block/drbd/drbd_nl.c
+++ b/drivers/block/drbd/drbd_nl.c
@@ -1290,7 +1290,7 @@ void drbd_reconsider_queue_parameters(struct drbd_device *device,
 		drbd_info(device, "max BIO size = %u\n", new);
 	}
 
-	lim = queue_limits_start_update(q);
+	lim = __queue_limits_start_update(q);
 	if (bdev) {
 		blk_set_stacking_limits(&lim);
 		lim.max_segments = drbd_backing_dev_max_segments(device);
@@ -1337,7 +1337,7 @@ void drbd_reconsider_queue_parameters(struct drbd_device *device,
 		lim.max_hw_discard_sectors = 0;
 	}
 
-	if (queue_limits_commit_update(q, &lim))
+	if (__queue_limits_commit_update(q, &lim))
 		drbd_err(device, "setting new queue limits failed\n");
 }
 
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 8f6761c27c68..b443b7092158 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -301,7 +301,7 @@ static int lo_read_simple(struct loop_device *lo, struct request *rq,
 
 static void loop_clear_limits(struct loop_device *lo, int mode)
 {
-	struct queue_limits lim = queue_limits_start_update(lo->lo_queue);
+	struct queue_limits lim = __queue_limits_start_update(lo->lo_queue);
 
 	if (mode & FALLOC_FL_ZERO_RANGE)
 		lim.max_write_zeroes_sectors = 0;
@@ -311,7 +311,7 @@ static void loop_clear_limits(struct loop_device *lo, int mode)
 		lim.discard_granularity = 0;
 	}
 
-	queue_limits_commit_update(lo->lo_queue, &lim);
+	__queue_limits_commit_update(lo->lo_queue, &lim);
 }
 
 static int lo_fallocate(struct loop_device *lo, struct request *rq, loff_t pos,
@@ -995,7 +995,7 @@ static int loop_reconfigure_limits(struct loop_device *lo, unsigned int bsize)
 
 	loop_get_discard_config(lo, &granularity, &max_discard_sectors);
 
-	lim = queue_limits_start_update(lo->lo_queue);
+	lim = __queue_limits_start_update(lo->lo_queue);
 	lim.logical_block_size = bsize;
 	lim.physical_block_size = bsize;
 	lim.io_min = bsize;
@@ -1010,7 +1010,7 @@ static int loop_reconfigure_limits(struct loop_device *lo, unsigned int bsize)
 		lim.discard_granularity = granularity;
 	else
 		lim.discard_granularity = 0;
-	return queue_limits_commit_update(lo->lo_queue, &lim);
+	return __queue_limits_commit_update(lo->lo_queue, &lim);
 }
 
 static int loop_configure(struct loop_device *lo, blk_mode_t mode,
@@ -1151,11 +1151,11 @@ static void __loop_clr_fd(struct loop_device *lo)
 	memset(lo->lo_file_name, 0, LO_NAME_SIZE);
 
 	/* reset the block size to the default */
-	lim = queue_limits_start_update(lo->lo_queue);
+	lim = __queue_limits_start_update(lo->lo_queue);
 	lim.logical_block_size = SECTOR_SIZE;
 	lim.physical_block_size = SECTOR_SIZE;
 	lim.io_min = SECTOR_SIZE;
-	queue_limits_commit_update(lo->lo_queue, &lim);
+	__queue_limits_commit_update(lo->lo_queue, &lim);
 
 	invalidate_disk(lo->lo_disk);
 	loop_sysfs_exit(lo);
diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index b852050d8a96..a73f11f0a2f5 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -348,7 +348,7 @@ static int __nbd_set_size(struct nbd_device *nbd, loff_t bytesize,
 	if (!nbd->pid)
 		return 0;
 
-	lim = queue_limits_start_update(nbd->disk->queue);
+	lim = __queue_limits_start_update(nbd->disk->queue);
 	if (nbd->config->flags & NBD_FLAG_SEND_TRIM)
 		lim.max_hw_discard_sectors = UINT_MAX >> SECTOR_SHIFT;
 	else
@@ -368,7 +368,7 @@ static int __nbd_set_size(struct nbd_device *nbd, loff_t bytesize,
 
 	lim.logical_block_size = blksize;
 	lim.physical_block_size = blksize;
-	error = queue_limits_commit_update(nbd->disk->queue, &lim);
+	error = __queue_limits_commit_update(nbd->disk->queue, &lim);
 	if (error)
 		return error;
 
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 3efe378f1386..cb4ca7dcce26 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -1101,14 +1101,12 @@ cache_type_store(struct device *dev, struct device_attribute *attr,
 
 	virtio_cwrite8(vdev, offsetof(struct virtio_blk_config, wce), i);
 
-	lim = queue_limits_start_update(disk->queue);
+	lim = __queue_limits_start_update(disk->queue);
 	if (virtblk_get_cache_mode(vdev))
 		lim.features |= BLK_FEAT_WRITE_CACHE;
 	else
 		lim.features &= ~BLK_FEAT_WRITE_CACHE;
-	blk_mq_freeze_queue(disk->queue);
-	i = queue_limits_commit_update(disk->queue, &lim);
-	blk_mq_unfreeze_queue(disk->queue);
+	i = __queue_limits_commit_update(disk->queue, &lim);
 	if (i)
 		return i;
 	return count;
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 59ce113b882a..b802a17abaef 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -2013,10 +2013,10 @@ static int blkif_recover(struct blkfront_info *info)
 	struct bio *bio;
 	struct blkfront_ring_info *rinfo;
 
-	lim = queue_limits_start_update(info->rq);
+	lim = __queue_limits_start_update(info->rq);
 	blkfront_gather_backend_features(info);
 	blkif_set_queue_limits(info, &lim);
-	rc = queue_limits_commit_update(info->rq, &lim);
+	rc = __queue_limits_commit_update(info->rq, &lim);
 	if (rc)
 		return rc;
 
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index df4cc8a27385..b62bcc71c76c 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -2128,13 +2128,10 @@ static int nvme_update_ns_info_generic(struct nvme_ns *ns,
 	struct queue_limits lim;
 	int ret;
 
-	blk_mq_freeze_queue(ns->disk->queue);
 	lim = queue_limits_start_update(ns->disk->queue);
 	nvme_set_ctrl_limits(ns->ctrl, &lim);
-	ret = queue_limits_commit_update(ns->disk->queue, &lim);
 	set_disk_ro(ns->disk, nvme_ns_is_readonly(ns, info));
-	blk_mq_unfreeze_queue(ns->disk->queue);
-
+	ret = queue_limits_commit_update(ns->disk->queue, &lim);
 	/* Hide the block-interface for these devices */
 	if (!ret)
 		ret = -ENODEV;
@@ -2177,12 +2174,11 @@ static int nvme_update_ns_info_block(struct nvme_ns *ns,
 			goto out;
 	}
 
-	blk_mq_freeze_queue(ns->disk->queue);
+	lim = queue_limits_start_update(ns->disk->queue);
 	ns->head->lba_shift = id->lbaf[lbaf].ds;
 	ns->head->nuse = le64_to_cpu(id->nuse);
 	capacity = nvme_lba_to_sect(ns->head, le64_to_cpu(id->nsze));
 
-	lim = queue_limits_start_update(ns->disk->queue);
 	nvme_set_ctrl_limits(ns->ctrl, &lim);
 	nvme_configure_metadata(ns->ctrl, ns->head, id, nvm, info);
 	nvme_set_chunk_sectors(ns, id, &lim);
@@ -2210,12 +2206,6 @@ static int nvme_update_ns_info_block(struct nvme_ns *ns,
 	if (!nvme_init_integrity(ns->head, &lim, info))
 		capacity = 0;
 
-	ret = queue_limits_commit_update(ns->disk->queue, &lim);
-	if (ret) {
-		blk_mq_unfreeze_queue(ns->disk->queue);
-		goto out;
-	}
-
 	set_capacity_and_notify(ns->disk, capacity);
 
 	/*
@@ -2228,7 +2218,9 @@ static int nvme_update_ns_info_block(struct nvme_ns *ns,
 		ns->head->features |= NVME_NS_DEAC;
 	set_disk_ro(ns->disk, nvme_ns_is_readonly(ns, info));
 	set_bit(NVME_NS_READY, &ns->flags);
-	blk_mq_unfreeze_queue(ns->disk->queue);
+	ret = queue_limits_commit_update(ns->disk->queue, &lim);
+	if (ret)
+		goto out;
 
 	if (blk_queue_is_zoned(ns->queue)) {
 		ret = blk_revalidate_disk_zones(ns->disk);
@@ -2285,7 +2277,7 @@ static int nvme_update_ns_info(struct nvme_ns *ns, struct nvme_ns_info *info)
 		struct queue_limits *ns_lim = &ns->disk->queue->limits;
 		struct queue_limits lim;
 
-		blk_mq_freeze_queue(ns->head->disk->queue);
+		lim = queue_limits_start_update(ns->head->disk->queue);
 		/*
 		 * queue_limits mixes values that are the hardware limitations
 		 * for bio splitting with what is the device configuration.
@@ -2301,7 +2293,6 @@ static int nvme_update_ns_info(struct nvme_ns *ns, struct nvme_ns_info *info)
 		 * the splitting limits in to make sure we still obey possibly
 		 * lower limitations of other controllers.
 		 */
-		lim = queue_limits_start_update(ns->head->disk->queue);
 		lim.logical_block_size = ns_lim->logical_block_size;
 		lim.physical_block_size = ns_lim->physical_block_size;
 		lim.io_min = ns_lim->io_min;
@@ -2312,13 +2303,12 @@ static int nvme_update_ns_info(struct nvme_ns *ns, struct nvme_ns_info *info)
 			ns->head->disk->flags |= GENHD_FL_HIDDEN;
 		else
 			nvme_init_integrity(ns->head, &lim, info);
-		ret = queue_limits_commit_update(ns->head->disk->queue, &lim);
 
 		set_capacity_and_notify(ns->head->disk, get_capacity(ns->disk));
 		set_disk_ro(ns->head->disk, nvme_ns_is_readonly(ns, info));
 		nvme_mpath_revalidate_paths(ns);
 
-		blk_mq_unfreeze_queue(ns->head->disk->queue);
+		ret = queue_limits_commit_update(ns->head->disk->queue, &lim);
 	}
 
 	return ret;
@@ -3338,9 +3328,9 @@ static int nvme_init_identify(struct nvme_ctrl *ctrl)
 	ctrl->max_hw_sectors =
 		min_not_zero(ctrl->max_hw_sectors, max_hw_sectors);
 
-	lim = queue_limits_start_update(ctrl->admin_q);
+	lim = __queue_limits_start_update(ctrl->admin_q);
 	nvme_set_ctrl_limits(ctrl, &lim);
-	ret = queue_limits_commit_update(ctrl->admin_q, &lim);
+	ret = __queue_limits_commit_update(ctrl->admin_q, &lim);
 	if (ret)
 		goto out_free;
 
diff --git a/drivers/scsi/mpi3mr/mpi3mr_os.c b/drivers/scsi/mpi3mr/mpi3mr_os.c
index 1bef88130d0c..62ca096daad1 100644
--- a/drivers/scsi/mpi3mr/mpi3mr_os.c
+++ b/drivers/scsi/mpi3mr/mpi3mr_os.c
@@ -1064,9 +1064,9 @@ mpi3mr_update_sdev(struct scsi_device *sdev, void *data)
 
 	mpi3mr_change_queue_depth(sdev, tgtdev->q_depth);
 
-	lim = queue_limits_start_update(sdev->request_queue);
+	lim = __queue_limits_start_update(sdev->request_queue);
 	mpi3mr_configure_tgt_dev(tgtdev, &lim);
-	WARN_ON_ONCE(queue_limits_commit_update(sdev->request_queue, &lim));
+	WARN_ON_ONCE(__queue_limits_commit_update(sdev->request_queue, &lim));
 }
 
 /**
diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
index 042329b74c6e..18d76d852cbe 100644
--- a/drivers/scsi/scsi_scan.c
+++ b/drivers/scsi/scsi_scan.c
@@ -1068,7 +1068,7 @@ static int scsi_add_lun(struct scsi_device *sdev, unsigned char *inq_result,
 	/*
 	 * No need to freeze the queue as it isn't reachable to anyone else yet.
 	 */
-	lim = queue_limits_start_update(sdev->request_queue);
+	lim = __queue_limits_start_update(sdev->request_queue);
 	if (*bflags & BLIST_MAX_512)
 		lim.max_hw_sectors = 512;
 	else if (*bflags & BLIST_MAX_1024)
@@ -1090,7 +1090,7 @@ static int scsi_add_lun(struct scsi_device *sdev, unsigned char *inq_result,
 		return SCSI_SCAN_NO_RESPONSE;
 	}
 
-	ret = queue_limits_commit_update(sdev->request_queue, &lim);
+	ret = __queue_limits_commit_update(sdev->request_queue, &lim);
 	if (ret) {
 		sdev_printk(KERN_ERR, sdev, "failed to apply queue limits.\n");
 		return SCSI_SCAN_NO_RESPONSE;
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index 8947dab132d7..9bf2d75cf37b 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -175,10 +175,9 @@ cache_type_store(struct device *dev, struct device_attribute *attr,
 		sdkp->WCE = wce;
 		sdkp->RCD = rcd;
 
-		lim = queue_limits_start_update(sdkp->disk->queue);
+		lim = __queue_limits_start_update(sdkp->disk->queue);
 		sd_set_flush_flag(sdkp, &lim);
-		blk_mq_freeze_queue(sdkp->disk->queue);
-		ret = queue_limits_commit_update(sdkp->disk->queue, &lim);
+		ret = __queue_limits_commit_update(sdkp->disk->queue, &lim);
 		blk_mq_unfreeze_queue(sdkp->disk->queue);
 		if (ret)
 			return ret;
@@ -481,11 +480,9 @@ provisioning_mode_store(struct device *dev, struct device_attribute *attr,
 	if (mode < 0)
 		return -EINVAL;
 
-	lim = queue_limits_start_update(sdkp->disk->queue);
+	lim = __queue_limits_start_update(sdkp->disk->queue);
 	sd_config_discard(sdkp, &lim, mode);
-	blk_mq_freeze_queue(sdkp->disk->queue);
-	err = queue_limits_commit_update(sdkp->disk->queue, &lim);
-	blk_mq_unfreeze_queue(sdkp->disk->queue);
+	err = __queue_limits_commit_update(sdkp->disk->queue, &lim);
 	if (err)
 		return err;
 	return count;
@@ -592,11 +589,9 @@ max_write_same_blocks_store(struct device *dev, struct device_attribute *attr,
 		sdkp->max_ws_blocks = max;
 	}
 
-	lim = queue_limits_start_update(sdkp->disk->queue);
+	lim = __queue_limits_start_update(sdkp->disk->queue);
 	sd_config_write_same(sdkp, &lim);
-	blk_mq_freeze_queue(sdkp->disk->queue);
-	err = queue_limits_commit_update(sdkp->disk->queue, &lim);
-	blk_mq_unfreeze_queue(sdkp->disk->queue);
+	err = __queue_limits_commit_update(sdkp->disk->queue, &lim);
 	if (err)
 		return err;
 	return count;
@@ -3724,7 +3719,7 @@ static int sd_revalidate_disk(struct gendisk *disk)
 
 	sd_spinup_disk(sdkp);
 
-	lim = queue_limits_start_update(sdkp->disk->queue);
+	lim = __queue_limits_start_update(sdkp->disk->queue);
 
 	/*
 	 * Without media there is no reason to ask; moreover, some devices
@@ -3803,9 +3798,7 @@ static int sd_revalidate_disk(struct gendisk *disk)
 	sd_config_write_same(sdkp, &lim);
 	kfree(buffer);
 
-	blk_mq_freeze_queue(sdkp->disk->queue);
-	err = queue_limits_commit_update(sdkp->disk->queue, &lim);
-	blk_mq_unfreeze_queue(sdkp->disk->queue);
+	err = __queue_limits_commit_update(sdkp->disk->queue, &lim);
 	if (err)
 		return err;
 
diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c
index 198bec87bb8e..de562a17cbd4 100644
--- a/drivers/scsi/sr.c
+++ b/drivers/scsi/sr.c
@@ -795,11 +795,9 @@ static int get_sectorsize(struct scsi_cd *cd)
 		set_capacity(cd->disk, cd->capacity);
 	}
 
-	lim = queue_limits_start_update(q);
+	lim = __queue_limits_start_update(q);
 	lim.logical_block_size = sector_size;
-	blk_mq_freeze_queue(q);
-	err = queue_limits_commit_update(q, &lim);
-	blk_mq_unfreeze_queue(q);
+	err = __queue_limits_commit_update(q, &lim);
 	return err;
 }
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 378d3a1a22fc..5218cc90937b 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -46,6 +46,8 @@ extern const struct device_type disk_type;
 extern const struct device_type part_type;
 extern const struct class block_class;
 
+extern void blk_mq_freeze_queue(struct request_queue *q);
+
 /*
  * Maximum number of blkcg policies allowed to be registered concurrently.
  * Defined here to simplify include dependency.
@@ -945,8 +947,36 @@ static inline struct queue_limits
 queue_limits_start_update(struct request_queue *q)
 {
 	mutex_lock(&q->limits_lock);
+	blk_mq_freeze_queue(q);
 	return q->limits;
 }
+/*
+ * Same as queue_limits_start_update() but without freezing the queue. It is
+ * appropriate for callers that do not require the queue to be frozen while
+ * the limits are being updated. It goes hand in hand with, and must be paired
+ * with, __queue_limits_commit_update().
+ */
+static inline struct queue_limits
+__queue_limits_start_update(struct request_queue *q)
+{
+	mutex_lock(&q->limits_lock);
+	return q->limits;
+}
+/*
+ * Same as queue_limits_start_update() but without acquiring ->limits_lock
+ * and freezing the queue. It is assumed that the caller has already acquired
+ * ->limits_lock and frozen the queue before calling this function.
+ */
+static inline struct queue_limits
+get_queue_limits(struct request_queue *q)
+{
+	return q->limits;
+}
+
+int set_queue_limits(struct request_queue *q,
+		struct queue_limits *lim);
+int __queue_limits_commit_update(struct request_queue *q,
+		struct queue_limits *lim);
 int queue_limits_commit_update(struct request_queue *q,
 		struct queue_limits *lim);
 int queue_limits_set(struct request_queue *q, struct queue_limits *lim);
