[PATCH 1/2] scsi: core: avoid to pre-allocate big chunk for protection meta data

Now scsi_mq_setup_tags() pre-allocates a big buffer for protection
sg list, and the buffer size is scsi_mq_sgl_size().

This isn't correct: scsi_mq_sgl_size() is meant for pre-allocating the
sg list for I/O data, and the protection data buffer is much smaller.
For example, one 512-byte sector needs 8 bytes of protection data, and
the max sector count for one request is 2560 (BLK_DEF_MAX_SECTORS),
so the max protection data size is just 20KB.

The usual case is that one bio builds a single bip segment. Thanks to
bio splitting, bio merging is seldom done for big I/O and only happens
for small bios. The bip segment count is usually the same as the bio
count in the request, so the number won't be very big, and allocating
from slab should be fast enough.

Reduce the pre-allocation to one sg entry for protection data, and
switch to runtime allocation from slab when the protection data
segment count is bigger than 1. This saves a huge pre-allocation
for protection data; for example, 500+MB can be saved on lpfc.

Cc: Christoph Hellwig <hch@xxxxxx>
Cc: Bart Van Assche <bvanassche@xxxxxxx>
Cc: Ewan D. Milne <emilne@xxxxxxxxxx>
Cc: Hannes Reinecke <hare@xxxxxxxx>
Signed-off-by: Ming Lei <ming.lei@xxxxxxxxxx>
---
 drivers/scsi/scsi_lib.c | 30 ++++++++++++++++++++++++------
 1 file changed, 24 insertions(+), 6 deletions(-)

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 07dfc17d4824..bdcf40851356 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -39,6 +39,12 @@
 #include "scsi_priv.h"
 #include "scsi_logging.h"
 
+/*
+ * The size of integrity metadata is usually small; one inline sg
+ * entry should cover normal cases.
+ */
+#define  SCSI_INLINE_PROT_SG_CNT  1
+
 static struct kmem_cache *scsi_sdb_cache;
 static struct kmem_cache *scsi_sense_cache;
 static struct kmem_cache *scsi_sense_isadma_cache;
@@ -553,12 +559,21 @@ static void scsi_uninit_cmd(struct scsi_cmnd *cmd)
 	}
 }
 
+static inline bool scsi_prot_use_inline_sg(struct scsi_cmnd *cmd)
+{
+	if (!scsi_prot_sglist(cmd))
+		return false;
+
+	return cmd->prot_sdb->table.sgl ==
+		(struct scatterlist *)(cmd->prot_sdb + 1);
+}
+
 static void scsi_mq_free_sgtables(struct scsi_cmnd *cmd)
 {
 	if (cmd->sdb.table.nents)
 		sg_free_table_chained(&cmd->sdb.table, true);
-	if (scsi_prot_sg_count(cmd))
-		sg_free_table_chained(&cmd->prot_sdb->table, true);
+	if (scsi_prot_sg_count(cmd) && !scsi_prot_use_inline_sg(cmd))
+		sg_free_table_chained(&cmd->prot_sdb->table, false);
 }
 
 static void scsi_mq_uninit_cmd(struct scsi_cmnd *cmd)
@@ -1044,9 +1059,11 @@ blk_status_t scsi_init_io(struct scsi_cmnd *cmd)
 		}
 
 		ivecs = blk_rq_count_integrity_sg(rq->q, rq->bio);
-
-		if (sg_alloc_table_chained(&prot_sdb->table, ivecs,
-				prot_sdb->table.sgl)) {
+		if (ivecs <= SCSI_INLINE_PROT_SG_CNT)
+			prot_sdb->table.nents = prot_sdb->table.orig_nents =
+				SCSI_INLINE_PROT_SG_CNT;
+		else if (sg_alloc_table_chained(&prot_sdb->table, ivecs,
+					NULL)) {
 			ret = BLK_STS_RESOURCE;
 			goto out_free_sgtables;
 		}
@@ -1846,7 +1863,8 @@ int scsi_mq_setup_tags(struct Scsi_Host *shost)
 	sgl_size = scsi_mq_sgl_size(shost);
 	cmd_size = sizeof(struct scsi_cmnd) + shost->hostt->cmd_size + sgl_size;
 	if (scsi_host_get_prot(shost))
-		cmd_size += sizeof(struct scsi_data_buffer) + sgl_size;
+		cmd_size += sizeof(struct scsi_data_buffer) +
+			sizeof(struct scatterlist) * SCSI_INLINE_PROT_SG_CNT;
 
 	memset(&shost->tag_set, 0, sizeof(shost->tag_set));
 	shost->tag_set.ops = &scsi_mq_ops;
-- 
2.9.5



