On Thu, 30 Jul 2015, 10:01pm -0700, Nicholas A. Bellinger wrote:
On Tue, 2015-07-28 at 14:24 -0700, Arun Easi wrote:
Hi Nic,
Thanks for the response, my comments below..
On Sat, 25 Jul 2015, 2:02am -0700, Nicholas A. Bellinger wrote:
Hello Arun, Roland & Co,
Thanks for your feedback. Apologies for delayed response on this bug.
On Wed, 2015-07-08 at 13:25 -0700, Arun Easi wrote:
<SNIP>
Longer term (Roland alluded to this, I think), it would be best to have the
stack call .queue_data_in/.write_pending with at most "max_sg_entries" SG
entries at a time and repeat until all of the data has been transferred, thus
allowing an effectively unlimited data transfer length.
Here's a better work-around following your earlier comments to enforce
target-core level se_cmd->t_data_sg fabric driver limits, currently
assuming 1:1 PAGE_SIZE mapping per internally allocated scatterlist
entry.
The max_data_sg_nents=1200 default for tcm_qla2xxx is following Roland's
earlier comments.
Please review.
diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
index ce8574b..2c934fc 100644
--- a/drivers/target/target_core_transport.c
+++ b/drivers/target/target_core_transport.c
@@ -2254,9 +2254,9 @@ void transport_kunmap_data_sg(struct se_cmd *cmd)
}
EXPORT_SYMBOL(transport_kunmap_data_sg);
-int
+sense_reason_t
target_alloc_sgl(struct scatterlist **sgl, unsigned int *nents, u32 length,
- bool zero_page)
+ bool zero_page, u32 max_nents)
{
struct scatterlist *sg;
struct page *page;
@@ -2265,9 +2265,16 @@ target_alloc_sgl(struct scatterlist **sgl, unsigned int *nents, u32 length,
int i = 0;
nent = DIV_ROUND_UP(length, PAGE_SIZE);
+ if (max_nents && nent > max_nents) {
+ pr_err_ratelimited("Required sg nents: %u exceeded fabric driver"
+ " enforced max_nents: %u, failing command\n",
+ nent, max_nents);
+ return TCM_SECTOR_COUNT_TOO_MANY;
How about setting a residual count of the difference and allowing the I/O to
succeed? That way, initiators could still make forward progress without
worrying so much about the fabric limitation.
Residual count + overflow sounds like a much better work-around for this
problem.
How about something like the following in target_cmd_size_check()
to detect residual overflow early, before internal SGL allocation occurs,
and reset se_cmd->data_length + ->prot_length based on the fabric
provided max_data_sg_nents..?
WDYT..?
Thanks Nic, yes this is a much more graceful approach. More comments below.
From 78b2faf26e6b51744294761d2e111a28391b4ad5 Mon Sep 17 00:00:00 2001
From: Nicholas Bellinger <nab@xxxxxxxxxxxxxxx>
Date: Thu, 30 Jul 2015 18:28:13 -0700
Subject: [PATCH] target: Add max_data_sg_nents I/O limit + residual overflow
This patch changes target-core to enforce an optional fabric driver
provided max_data_sg_nents limit, which determines the maximum payload
size based upon a 1:1 mapping of PAGE_SIZE to scatterlist entries.
If a new se_cmd payload length exceeds the provided value, it will
recalculate ->data_length and ->prot_length based upon the limit, and
set SCF_OVERFLOW_BIT + residual_count in the outgoing response.
Reported-by: Craig Watson <craig.watson@xxxxxxxxxxxxxxxxxxx>
Cc: Craig Watson <craig.watson@xxxxxxxxxxxxxxxxxxx>
Cc: Roland Dreier <roland@xxxxxxxxxxxxxxx>
Cc: Arun Easi <arun.easi@xxxxxxxxxx>
Cc: Giridhar Malavali <giridhar.malavali@xxxxxxxxxx>
Cc: Andrew Vasquez <andrew.vasquez@xxxxxxxxxx>
Cc: Christoph Hellwig <hch@xxxxxx>
Cc: Hannes Reinecke <hare@xxxxxxx>
Cc: Martin K. Petersen <martin.petersen@xxxxxxxxxx>
Signed-off-by: Nicholas Bellinger <nab@xxxxxxxxxxxxxxx>
---
drivers/scsi/qla2xxx/tcm_qla2xxx.c | 5 +++++
drivers/target/target_core_spc.c | 13 ++++++++++---
drivers/target/target_core_transport.c | 23 +++++++++++++++++++++++
include/target/target_core_fabric.h | 13 +++++++++++++
4 files changed, 51 insertions(+), 3 deletions(-)
diff --git a/drivers/scsi/qla2xxx/tcm_qla2xxx.c b/drivers/scsi/qla2xxx/tcm_qla2xxx.c
index 9224a06..6649809 100644
--- a/drivers/scsi/qla2xxx/tcm_qla2xxx.c
+++ b/drivers/scsi/qla2xxx/tcm_qla2xxx.c
@@ -1804,6 +1804,11 @@ static const struct target_core_fabric_ops tcm_qla2xxx_ops = {
.module = THIS_MODULE,
.name = "qla2xxx",
.node_acl_size = sizeof(struct tcm_qla2xxx_nacl),
+ /*
+ * XXX: Limit assumes single page per scatter-gather-list entry.
+ * Current maximum is ~4.9 MB per se_cmd->t_data_sg with PAGE_SIZE=4096
+ */
+ .max_data_sg_nents = 1200,
.get_fabric_name = tcm_qla2xxx_get_fabric_name,
.tpg_get_wwn = tcm_qla2xxx_get_fabric_wwn,
.tpg_get_tag = tcm_qla2xxx_get_tag,
diff --git a/drivers/target/target_core_spc.c b/drivers/target/target_core_spc.c
index b5ba1ec..ac249e9 100644
--- a/drivers/target/target_core_spc.c
+++ b/drivers/target/target_core_spc.c
@@ -484,8 +484,8 @@ static sense_reason_t
spc_emulate_evpd_b0(struct se_cmd *cmd, unsigned char *buf)
{
struct se_device *dev = cmd->se_dev;
- int have_tp = 0;
- int opt, min;
+ u32 mtl = 0;
+ int have_tp = 0, opt, min;
/*
* Following spc3r22 section 6.5.3 Block Limits VPD page, when
@@ -516,8 +516,15 @@ spc_emulate_evpd_b0(struct se_cmd *cmd, unsigned char *buf)
/*
* Set MAXIMUM TRANSFER LENGTH
+ *
+ * XXX: Currently assumes single PAGE_SIZE per scatterlist for fabrics
+ * enforcing maximum HW scatter-gather-list entry limit
*/
- put_unaligned_be32(dev->dev_attrib.hw_max_sectors, &buf[8]);
+ if (cmd->se_tfo->max_data_sg_nents) {
+ mtl = (cmd->se_tfo->max_data_sg_nents * PAGE_SIZE) /
+ dev->dev_attrib.block_size;
+ }
+ put_unaligned_be32(min_not_zero(mtl, dev->dev_attrib.hw_max_sectors), &buf[8]);
/*
* Set OPTIMAL TRANSFER LENGTH
diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
index ce8574b..7723f82 100644
--- a/drivers/target/target_core_transport.c
+++ b/drivers/target/target_core_transport.c
@@ -1118,6 +1118,29 @@ target_cmd_size_check(struct se_cmd *cmd, unsigned int size)
cmd->data_length = size;
}
}
+ /*
+ * Check if fabric enforced maximum SGL entries per I/O descriptor
+ * exceeds se_cmd->data_length. If true, set SCF_OVERFLOW_BIT +
+ * residual_count and reduce original cmd->data_length to maximum
+ * length based on single PAGE_SIZE entry scatter-lists.
+ */
+ if (cmd->se_tfo->max_data_sg_nents) {
+ u32 mtl = (cmd->se_tfo->max_data_sg_nents * PAGE_SIZE);
+
+ if (cmd->data_length > mtl) {
+ cmd->se_cmd_flags |= SCF_OVERFLOW_BIT;
Shouldn't this be SCF_UNDERFLOW_BIT, since the target is sending/receiving
less data than requested?
+ cmd->residual_count = (mtl - cmd->data_length);
Shouldn't this be "cmd->data_length - mtl"?
+ cmd->data_length = mtl;
+ /*
+ * Reset sbc_check_prot() calculated protection payload
+ * length based upon the new smaller MTL.
+ */
+ if (cmd->prot_length) {
+ u32 sectors = (mtl / dev->dev_attrib.block_size);
+ cmd->prot_length = dev->prot_length * sectors;
+ }
+ }
+ }
Looking at the function as a whole, I think more logic needs to be added
to this block for it to work correctly in the presence of the previous
block (the size != cmd->data_length one). The OVERFLOW/UNDERFLOW flags
might already have been set, so residual_count would have to be adjusted
and the flags modified, if needed.
BTW, though SBC says such a request (one that exceeds the EVPD 0xB0
maximum transfer length) should be terminated with a check condition, I
guess this is a practical approach to the issue and hopefully OK.
Regards,
-Arun
return 0;
diff --git a/include/target/target_core_fabric.h b/include/target/target_core_fabric.h
index 18afef9..995f2e1 100644
--- a/include/target/target_core_fabric.h
+++ b/include/target/target_core_fabric.h
@@ -5,6 +5,19 @@ struct target_core_fabric_ops {
struct module *module;
const char *name;
size_t node_acl_size;
+ /*
+ * Limits number of scatterlist entries per SCF_SCSI_DATA_CDB payload.
+ * Setting this value tells target-core to enforce this limit, and to
+ * report it as the INQUIRY EVPD=0xb0 MAXIMUM TRANSFER LENGTH.
+ *
+ * target-core will currently reset se_cmd->data_length to this
+ * maximum size, and set OVERFLOW residual count if length exceeds
+ * this limit.
+ *
+ * XXX: Not all initiator hosts honor this block-limit EVPD
+ * XXX: Currently assumes single PAGE_SIZE per scatterlist entry
+ */
+ u32 max_data_sg_nents;
char *(*get_fabric_name)(void);
char *(*tpg_get_wwn)(struct se_portal_group *);
u16 (*tpg_get_tag)(struct se_portal_group *);
--