James Smart wrote:
The mid-layer queue depth handling is really designed/optimized around
behavior for a JBOD. Thus, if it's a single-LUN device, the LLDD could
largely ignore doing anything with adjusting the queue depth.
However, for arrays with multiple LUNs, the queue depth is usually a
target-level resource, so the midlayer/block-layer's implementation falls
on its face fairly quickly. I brought this up two years ago at the storage
summit. What needs to happen is the creation of queue ramp-down and
ramp-up policies that can be selected on a per-LUN basis, and have these
implemented in the midlayer (why should the LLDD ever look at SCSI command
results?). What will make this difficult is the ramp-up policies, as they
can be very target-device-specific or configuration/load centric.
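To make the ramp-down/ramp-up distinction concrete, here is a minimal
userspace sketch of the kind of per-LUN policy logic being discussed. All
names (`lun_qd`, `qd_ramp_down`, `qd_complete_ok`) and the specific
heuristics (halve on QUEUE FULL, step up after N clean completions) are
hypothetical illustrations, not the actual midlayer or lpfc code:

```c
#include <stddef.h>

/* Hypothetical per-LUN queue-depth state. In a real midlayer
 * implementation this would hang off the scsi_device.
 */
struct lun_qd {
	int depth;         /* current queue depth */
	int max_depth;     /* configured ceiling */
	int min_depth;     /* floor, never below 1 */
	int good_cmds;     /* clean completions since last adjustment */
	int ramp_up_after; /* successes required before raising depth */
};

/* Ramp down: on QUEUE FULL / BUSY status, halve the depth (but not
 * below the floor) and restart the success counter.
 */
static void qd_ramp_down(struct lun_qd *q)
{
	q->good_cmds = 0;
	q->depth /= 2;
	if (q->depth < q->min_depth)
		q->depth = q->min_depth;
}

/* Ramp up: after enough completions without congestion, raise the
 * depth by one step toward the ceiling.
 */
static void qd_complete_ok(struct lun_qd *q)
{
	if (q->depth >= q->max_depth)
		return;
	if (++q->good_cmds >= q->ramp_up_after) {
		q->good_cmds = 0;
		q->depth++;
	}
}
```

The ramp-down half is the easy, generic part; the ramp-up trigger
(`ramp_up_after` here) is exactly the piece James notes is hard to make
generic, since the right threshold depends on the target and the load.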
For the ramp-up, are you referring to code like lpfc_rampup_queue_depth?
We were just talking about this on the fcoe list. Why did lpfc and
qla2xxx end up implementing their own code? We started to look into
moving this into the scsi layer. It does not seem like there was a major
reason why it should not have been more common. Was it just one of those
things where it got added in one driver and then added in another?
If we moved code like that to the scsi layer, then is all that is needed
an interface to configure this?
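One way the "per-LUN selectable policy" configuration could look, if the
ramp code moved into the scsi layer, is a named policy table the config
interface selects from. This is purely a sketch; the names
(`qd_policy`, `qd_policy_find`) and the two example policies are
invented for illustration, not an existing kernel API:

```c
#include <string.h>
#include <stddef.h>

/* Hypothetical pluggable ramp policy: each policy maps the current
 * depth to a new one on congestion (ramp_down) or success (ramp_up).
 */
struct qd_policy {
	const char *name;
	int (*ramp_down)(int depth);
	int (*ramp_up)(int depth);
};

static int qd_halve(int depth)     { return depth > 1 ? depth / 2 : 1; }
static int qd_step_down(int depth) { return depth > 1 ? depth - 1 : 1; }
static int qd_step_up(int depth)   { return depth + 1; }

static const struct qd_policy qd_policies[] = {
	{ "halving", qd_halve,     qd_step_up },
	{ "linear",  qd_step_down, qd_step_up },
};

/* Look up a policy by name, as a per-LUN config knob (e.g. a sysfs
 * attribute) might do when the admin writes a policy name.
 */
static const struct qd_policy *qd_policy_find(const char *name)
{
	for (size_t i = 0;
	     i < sizeof(qd_policies) / sizeof(qd_policies[0]); i++)
		if (strcmp(qd_policies[i].name, name) == 0)
			return &qd_policies[i];
	return NULL;
}
```

With something like this in the midlayer, the LLDDs would only report
command status, and the configuration question reduces to exposing the
policy name (and its tunables) per LUN.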