Re: [PATCH] target: Remove unused se_cmd.cmd_spdtl

On Sat, 2012-08-18 at 10:13 -0700, Roland Dreier wrote:
> On Fri, Aug 17, 2012 at 6:02 PM, Nicholas A. Bellinger
> <nab@xxxxxxxxxxxxxxx> wrote:
> > No, or at least that is not what happens anymore with current target
> > core + iscsi-target code..
> >
> > The se_cmd->data_length re-assignment here is what will be used by
> > iscsi-target fabric code for all iSCSI descriptor related allocations
> > + transfer, instead of the original fabric 'expected transfer length' that
> > declares the size of the SCSI initiator's available buffer at the other
> > end.
> 
> Not sure I follow this.  Isn't cmd->data_length also what we use to
> allocate the buffer used to store the data we're going to transfer?
> 

se_cmd->data_length is initially set up by transport_init_se_cmd() based
upon the fabric provided expected transfer length.  It then gets
adjusted downward (before any allocations happen) in the underflow
case.  So in the overflow case w/ the proposed patch, we want to keep
the original se_cmd->data_length, because it's smaller than the CDB
provided allocation length.
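
To make the adjustment order concrete, here's a minimal stand-alone C
sketch of the size check described above.  The flag names mirror
target_core, but the struct and function are simplified models I'm
using for illustration, not the actual kernel code:

#include <stdio.h>
#include <stdint.h>

#define SCF_OVERFLOW_BIT	(1 << 0)
#define SCF_UNDERFLOW_BIT	(1 << 1)

struct se_cmd_model {
	uint32_t data_length;		/* fabric expected transfer length */
	uint32_t residual_count;
	uint32_t se_cmd_flags;
};

static void size_check(struct se_cmd_model *cmd, uint32_t cdb_size)
{
	if (cdb_size == cmd->data_length)
		return;

	if (cdb_size > cmd->data_length) {
		/* Overflow: keep the smaller fabric length so every
		 * allocation + transfer stays within the initiator's
		 * buffer. */
		cmd->se_cmd_flags |= SCF_OVERFLOW_BIT;
		cmd->residual_count = cdb_size - cmd->data_length;
	} else {
		/* Underflow: shrink to the CDB size before any
		 * allocations happen. */
		cmd->se_cmd_flags |= SCF_UNDERFLOW_BIT;
		cmd->residual_count = cmd->data_length - cdb_size;
		cmd->data_length = cdb_size;
	}
}

int main(void)
{
	struct se_cmd_model cmd = { .data_length = 4096 };

	/* Overflow case: CDB claims 8192, fabric buffer is 4096. */
	size_check(&cmd, 8192);
	printf("data_length=%u residual=%u flags=0x%x\n",
	       cmd.data_length, cmd.residual_count, cmd.se_cmd_flags);
	return 0;
}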

> I guess my example with READ(10) actually works, because
> iblock_execute_rw() just uses the buffer allocated, rather than
> paying attention to the sector count in the command.
> 

Correct.  This is true for all SCF_SCSI_DATA_CDB commands with virtual
backends; they no longer look into the CDB.
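
For example (a stand-alone model, not the iblock code itself), the
data path only walks the scatterlist it was handed, which was sized
from se_cmd->data_length:

#include <stdio.h>
#include <stdint.h>

/* Simplified stand-in for a scatterlist entry. */
struct sg_model {
	uint8_t	 *buf;
	uint32_t  length;
};

/* Models iblock_execute_rw's behavior: the transfer is bounded by the
 * SGL, and the sector count in the CDB is never consulted. */
static uint32_t execute_rw(struct sg_model *sgl, int nents)
{
	uint32_t total = 0;
	int i;

	for (i = 0; i < nents; i++)
		total += sgl[i].length;
	return total;
}

int main(void)
{
	uint8_t a[512], b[512];
	struct sg_model sgl[] = { { a, sizeof(a) }, { b, sizeof(b) } };

	/* Even if the READ(10) CDB claimed more sectors, only the
	 * 1024 bytes actually allocated would move. */
	printf("transferred %u bytes\n", execute_rw(sgl, 2));
	return 0;
}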

> But what if an initiator sends, say, REPORT LUNS or PR OUT
> with an allocation length of 8192, but a transport-level length
> of only 4096?  If the REPORT LUNS or PR OUT response is
> bigger than 4096 bytes, we'll overflow the allocated buffer with
> your patch, right?
> 

With virtual backends + TCM control CDB emulation, the emulation code
should be using the (possibly adjusted) se_cmd->data_length as its
upper limit for building a response payload, and not the possibly
overflowed allocation length it may have extracted from the CDB.

So as long as the control CDB emulation code is doing the right thing
here by using se_cmd->data_length, we should be OK for the virtual
backend cases.  (Need to audit this)
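
For REPORT LUNS, the "right thing" would look roughly like the sketch
below (a stand-alone model with an illustrative LUN encoding, not the
actual spc emulation code): the header reports the full LUN list
length per SPC, but no write ever goes past data_length:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

static void put_be32(uint8_t *p, uint32_t v)
{
	p[0] = v >> 24; p[1] = v >> 16; p[2] = v >> 8; p[3] = v;
}

static void emulate_report_luns(uint8_t *buf, uint32_t data_length,
				uint32_t lun_count)
{
	uint32_t lun_list_len = lun_count * 8;	/* 8 bytes per LUN entry */
	uint32_t i, off;

	memset(buf, 0, data_length);
	if (data_length >= 4)
		put_be32(buf, lun_list_len);	/* full length, per SPC */

	for (i = 0; i < lun_count; i++) {
		off = 8 + i * 8;		/* 8-byte header first */
		if (off + 8 > data_length)
			break;	/* clamp to data_length, never to the
				 * CDB allocation length */
		put_be32(buf + off, i << 16);	/* illustrative encoding */
	}
}

int main(void)
{
	uint8_t buf[4096];

	/* CDB allocation length said 8192, transport length said 4096:
	 * only 4096 bytes were allocated, so clamp to that. */
	emulate_report_luns(buf, sizeof(buf), 1024);
	printf("lun_list_len=%u\n",
	       (buf[0] << 24) | (buf[1] << 16) | (buf[2] << 8) | buf[3]);
	return 0;
}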

Now for pSCSI that may not be the case, because we do expect the
underlying layers to look into the CDB at some point.  Currently we are
passing in SGLs sized from the adjusted se_cmd->data_length, so this
might end up causing a problem in SCSI..  (Need to test this)

Otherwise, I'm tempted to just reject all CDB overflow cases for pSCSI
backends, as every host SCSI stack I've seen in practice only generates
underflow cases for control CDBs anyway..
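
That rejection would amount to something like this gate (again just a
sketch; the helper name and boolean are hypothetical, though
TCM_INVALID_CDB_FIELD is modeled on the existing sense reason):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum size_check_result { TCM_OK, TCM_INVALID_CDB_FIELD };

/* Hypothetical gate: pSCSI backends get no overflow adjustment, the
 * command is simply failed. */
static enum size_check_result
pscsi_size_gate(bool backend_is_pscsi, uint32_t cdb_size,
		uint32_t data_length)
{
	if (backend_is_pscsi && cdb_size > data_length)
		return TCM_INVALID_CDB_FIELD;
	return TCM_OK;
}

int main(void)
{
	/* REPORT LUNS with an 8k allocation length over a 4k transport
	 * buffer would be rejected outright for pSCSI. */
	printf("%s\n", pscsi_size_gate(true, 8192, 4096) == TCM_OK ?
	       "accepted" : "rejected");
	return 0;
}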

--nab

