struct scsi_cmnd --> underflow

Hi,

I am writing a SCSI LLD and have a query regarding handling of underflow errors.

In sd_prep_fn() (drivers/scsi/sd.c) the code populates the SCSI command
pointer (SCpnt) and its cmnd.
It then sets the total length to be transferred in the SCSI data
buffer: SCpnt->sdb.length = this_count * sdp->sector_size.
Here 'this_count' appears to be the number of sectors, so this
assignment also looks fine.
But when the 'underflow' field of SCpnt is assigned, the code uses a
hard-coded bit shift: SCpnt->underflow = this_count << 9. Is this
valid, or should it be SCpnt->underflow = this_count * sdp->sector_size,
so that it matches sdb.length?
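
To make the mismatch concrete, below is a minimal userspace sketch (plain
C, not kernel code; the variable names only mirror the scsi_cmnd /
scsi_device fields discussed above) that computes both values the way
sd_prep_fn appears to, for a 4K-sector device:

#include <stdio.h>

/* Minimal userspace sketch -- not kernel code.  The names only mirror
 * the scsi_cmnd/scsi_device fields discussed above. */
int main(void)
{
        unsigned int sector_size = 4096;               /* 4K logical blocks, as in our case */
        unsigned int this_count  = 8192 / sector_size; /* an 8K read = 2 device sectors     */

        unsigned int sdb_length = this_count * sector_size; /* mirrors SCpnt->sdb.length    */
        unsigned int underflow  = this_count << 9;          /* mirrors SCpnt->underflow     */

        printf("sdb.length = %u bytes, underflow = %u bytes\n",
               sdb_length, underflow);                 /* prints 8192 vs 1024               */
        return 0;
}

The two values only agree when sector_size happens to be 512.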

So, if the sector size exported by the target device is larger than
512 bytes (4K in our case), SCpnt->underflow is not the same as
SCpnt->sdb.length; the latter is 8 times larger. For a read of, say,
8K, 'underflow' is set to only 1K. Now, if the target returns only 2K
of data for that read, the amount of data actually transferred (2K) is
not less than the underflow value (1K in this case), even though 6K of
the request is missing. Is the LLD then not supposed to return
(DID_ERROR << 16) in 'result' for this case (assuming the target
returns GOOD status)?
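
For reference, this is the kind of completion-time check I have in mind
on the LLD side (again a hedged userspace sketch, assuming the common
convention of flagging DID_ERROR when the bytes actually transferred
fall short of underflow; DID_ERROR is 0x07 in the kernel SCSI headers):

#include <stdio.h>

#define DID_ERROR 0x07  /* host byte value from the kernel SCSI headers */

/* Userspace sketch of the usual LLD convention: flag DID_ERROR when the
 * bytes actually transferred fall short of the command's underflow. */
int main(void)
{
        unsigned int sdb_length  = 8192; /* 8K read on a 4K-sector disk */
        unsigned int underflow   = 1024; /* this_count << 9 = 2 << 9    */
        unsigned int transferred = 2048; /* target only returned 2K     */
        unsigned int result      = 0;    /* target status was GOOD      */

        if (transferred < underflow)     /* 2048 < 1024 is false ...    */
                result = DID_ERROR << 16;

        /* ... so the 6K shortfall against sdb.length is never flagged. */
        printf("result = 0x%08x\n", result);
        return 0;
}

With underflow set to this_count * sdp->sector_size instead, the same
check would catch the short transfer.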

Please let me know.

thanks,
--shailesh
--

