Hi all,
I've noticed that virtually everybody assumes that the length of the
sense buffer is SCSI_SENSE_BUFFERSIZE.
Consequently, every caller passes SCSI_SENSE_BUFFERSIZE when calling
any of the sense-processing functions such as scsi_normalize_sense().
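For illustration, the ubiquitous caller pattern looks roughly like this
(a sketch, not taken from any particular driver):

  struct scsi_sense_hdr sshdr;

  /* everybody passes the full allocation size, never the number
   * of bytes actually received from the device */
  if (scsi_normalize_sense(cmd->sense_buffer,
                           SCSI_SENSE_BUFFERSIZE, &sshdr)) {
          /* sshdr.sense_key / sshdr.asc / sshdr.ascq are valid here */
  }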
However, those processing functions go to great lengths to handle
_shorter_ buffer sizes, which of course never occur.
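Note that the sense data itself encodes its real length: per SPC, byte 7
holds the ADDITIONAL SENSE LENGTH in both fixed (0x70/0x71) and
descriptor (0x72/0x73) format. A hypothetical helper could recover the
actual length like this:

  /* total sense length is the additional sense length plus the
   * 8-byte header, clamped to what the buffer can hold */
  static int sense_data_length(const u8 *sense, int buf_len)
  {
          if (buf_len < 8)
                  return buf_len;         /* truncated header */
          return min(buf_len, sense[7] + 8);
  }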
On the other hand, most LLDDs are capable of detecting the number of
sense bytes actually transferred; there is even a field in struct
request (rq->sense_len) which would allow us to pass that number up
the stack.
But this field is happily neglected, too.
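To illustrate, something like the following in an LLDD's completion
path would be enough to make that information available (function and
parameter names are made up):

  /* hypothetical completion handler: the HBA reports how many sense
   * bytes it actually transferred, so record that number instead of
   * silently assuming SCSI_SENSE_BUFFERSIZE */
  static void lldd_complete_cmd(struct scsi_cmnd *cmd,
                                const u8 *hw_sense, int hw_sense_len)
  {
          int len = min(hw_sense_len, SCSI_SENSE_BUFFERSIZE);

          memcpy(cmd->sense_buffer, hw_sense, len);
          if (cmd->request)
                  cmd->request->sense_len = len;
  }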
So, questions here:
- Do we actually care about a sense buffer underflow?
We won't be able to detect it currently anyway;
plus we haven't received any error reports for it.
And the allocated sense buffer is _always_ SCSI_SENSE_BUFFERSIZE ...
- What is the intention of rq->sense_len?
  From what I gather it _should_ hold the number of sense bytes
  actually received. But that doesn't happen currently.
  For short sense buffers that doesn't matter, but for sense buffer
  _overruns_ it would be quite good to know (see the sketch after
  this list).
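With a trustworthy length we could at least flag truncation; a sketch,
where 'actual_len' stands for whatever the LLDD reported:

  /* the additional-length field claims more bytes than the buffer
   * can hold, or the transfer filled the buffer completely */
  if (cmd->sense_buffer[7] + 8 > SCSI_SENSE_BUFFERSIZE ||
      actual_len >= SCSI_SENSE_BUFFERSIZE)
          sdev_printk(KERN_WARNING, cmd->device,
                      "sense data truncated\n");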
All of which causes quite some confusion about how to probe for valid
sense data; the current candidates (combined in the sketch below) are:
- DRIVER_SENSE ?
- rq->sense_len?
- scsi_normalize_sense?
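In practice one ends up combining the checks, since none of them is
authoritative on its own and rq->sense_len is never set (sketch):

  struct scsi_sense_hdr sshdr;

  /* DRIVER_SENSE only says the LLDD copied _something_ into the
   * buffer; scsi_normalize_sense() then checks that it parses */
  if ((driver_byte(cmd->result) & DRIVER_SENSE) &&
      scsi_normalize_sense(cmd->sense_buffer,
                           SCSI_SENSE_BUFFERSIZE, &sshdr)) {
          /* looks like valid sense data */
  }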
Thanks for any clarification.
Cheers,
Hannes
--
Dr. Hannes Reinecke zSeries & Storage
hare@xxxxxxx +49 911 74053 688
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: J. Hawn, J. Guild, F. Imendörffer, HRB 16746 (AG Nürnberg)