On 2/4/2020 6:56 PM, Martin K. Petersen wrote:
> James,
>
>> There has been a desire to set the lun queue depth on all luns on a
>> shost. Today that is done by an external script looping through
>> discovered sdevs and setting an sdev attribute. The desire is to have
>> a single shost attribute that performs this work, removing the
>> requirement for scripting.
>
> I'd like you to elaborate a bit on this.
>
> - Why is scripting or adding a udev rule inadequate?
Simply put, admins don't want to create them, modify the system for
them, nor concern themselves with finding and changing each of the lun
devices that connect to a particular port. They have been comfortable
changing the driver's initial lun queue depth via a module parameter.
Since that took effect at driver load, it applied to everything at the
time of initial discovery, so it was fine. But if the system won't be
rebooted for days or weeks, they want to do something as simple as the
module parameter, and with only "1 echo command". Thus we wanted to
make the parameter rw rather than ro.
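For concreteness, the per-device loop admins have to script today looks
roughly like this (a sketch only - the function name is made up, and the
sysfs root is parameterized here purely for illustration; on a real
system it would be /sys):

```shell
# Sketch of today's external-script approach: walk every discovered
# sdev's queue_depth attribute under sysfs and echo the new value in.
# Function name and the root parameter are illustrative only.
set_all_lun_queue_depths() {
    depth="$1"
    root="${2:-/sys}"
    for attr in "$root"/bus/scsi/devices/*/queue_depth; do
        [ -e "$attr" ] || continue
        echo "$depth" > "$attr"
    done
}

# e.g.: set_all_lun_queue_depths 32
```

This is exactly the loop the single shost attribute is meant to replace
with one write.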
However, the only interface available to the lldd was
scsi_change_queue_depth(), which changes the depth but does not change
the device's max value (which it does do if written via the per-device
attribute). So although the now-writable attribute allows the driver
value to change, it would only be applied to storage devices discovered
after the change. Existing devices would not have their max changed
unless the per-device attribute were written. This new interface gave
the lldd a new routine, which finds all the devices on the shost and
applies the new max/value to each of them - as if the per-device
attributes had been written.
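As a userspace model of what that new routine does kernel-side (in the
kernel it would iterate sdevs via something like shost_for_each_device;
the directory layout and the max_queue_depth file below are invented
purely for this model):

```shell
# Userspace model (not kernel code) of the new lldd routine: for every
# sdev belonging to the shost, update both the remembered max and the
# current queue depth, just as a per-device attribute write would.
apply_shost_queue_depth() {
    host_dir="$1"   # model stand-in for the shost
    depth="$2"
    for sdev in "$host_dir"/device/*/; do
        [ -d "$sdev" ] || continue
        echo "$depth" > "${sdev}max_queue_depth"
        echo "$depth" > "${sdev}queue_depth"
    done
}
```

The point of the model is the pairing: both the max and the current
value move together, so existing devices behave as if their per-device
attribute had been written.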
> - Why is there a requirement to statically clamp the queue depth
>   instead of letting the device manage it?
You are misreading it, and perhaps my description led things astray. It
doesn't "clamp" it at a fixed/unchangeable depth. It sets the max to a
new value and changes the current queue depth to that new value. These
are the same actions the per-device attribute performs when written to.
The management of queue depth beyond that point is the same as it was -
meaning queue fulls ramp it down, there is ramp up, and so on. So it is
the device managing it, just with perhaps a small blip if the new value
is higher than its current level, or a pause while the queue drains
down if its current level was higher than the new value.
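To illustrate that last point with a toy model (the halve-on-queue-full
and +1 ramp-up rules here are simplifications for illustration, not the
kernel's exact algorithm):

```shell
# Toy model: after the shost write sets a new max, normal queue-depth
# management continues - queue fulls ramp the current depth down, and
# ramp-up raises it again, never above the new max.
max=32
cur=32

queue_full() {        # device returned QUEUE FULL: ramp down
    cur=$(( cur / 2 ))
    if [ "$cur" -lt 1 ]; then cur=1; fi
}

ramp_up() {           # periodic ramp up, clamped at max
    cur=$(( cur + 1 ))
    if [ "$cur" -gt "$max" ]; then cur=$max; fi
}
```

Whatever the exact ramp rules, the current depth keeps floating under
device/driver control as before; only the ceiling moved.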
-- james