On 2016-05-02 04:35 AM, Hannes Reinecke wrote:
On 05/01/2016 04:44 AM, Douglas Gilbert wrote:
Add submit_queues parameter (minimum and default: 1; maximum:
nr_cpu_ids) that controls how many queues are built, each with
its own lock and in_use bit vector. Add a statistics parameter
which defaults to on.
Signed-off-by: Douglas Gilbert <dgilbert@xxxxxxxxxxxx>
---
drivers/scsi/scsi_debug.c | 680 +++++++++++++++++++++++++++++-----------------
1 file changed, 426 insertions(+), 254 deletions(-)
Two general questions for this:
- Why do you get rid of the embedded command payload?
I'm not sure what payload you are referring to. This patch only
adds multiple queues; I can't see that it removes anything.
Where's the benefit of allocating the commands yourself?
The commands are either replied to "in thread" (e.g. when delay=0
or an error is detected), or queued on an hrtimer or a work item.
A pointer to the command is held in the queue (the same as before
this patch). The only allocations associated with commands are to
build data-in buffers for responses (e.g. for an INQUIRY command
requesting a VPD page).
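
For reference, the per-queue bookkeeping amounts to something like the
following sketch (the names sdebug_queue, SDEBUG_CANQUEUE and
sdebug_claim_slot are illustrative, not necessarily what the patch uses):

#include <linux/spinlock.h>
#include <linux/bitops.h>
#include <scsi/scsi_cmnd.h>

#define SDEBUG_CANQUEUE 128     /* illustrative per-queue depth */

struct sdebug_queue {
        /* one slot per outstanding command; holds a pointer back to the
         * scsi_cmnd (plus its timer/work state) until it is completed */
        struct scsi_cmnd *qc_arr[SDEBUG_CANQUEUE];
        DECLARE_BITMAP(in_use_bm, SDEBUG_CANQUEUE);
        spinlock_t qc_lock;     /* protects qc_arr and in_use_bm */
};

/* claim a free slot on the queue chosen for this command; returns the
 * slot index, or -1 if the queue is full */
static int sdebug_claim_slot(struct sdebug_queue *sqp, struct scsi_cmnd *scmd)
{
        unsigned long iflags;
        int k;

        spin_lock_irqsave(&sqp->qc_lock, iflags);
        k = find_first_zero_bit(sqp->in_use_bm, SDEBUG_CANQUEUE);
        if (k < SDEBUG_CANQUEUE) {
                set_bit(k, sqp->in_use_bm);
                sqp->qc_arr[k] = scmd;
        } else {
                k = -1;         /* queue full, caller reports busy */
        }
        spin_unlock_irqrestore(&sqp->qc_lock, iflags);
        return k;
}

Each of the submit_queues instances of that structure has its own lock,
so commands on different queues do not contend with each other.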
- Wouldn't it be better to move to a per-cpu structure per queue?
Each queue will be tacked to a CPU anyway, so you could be using
per-cpu structures. Otherwise you'll run into synchronization
issues, and any performance gain you might get from scsi-mq is
lost as you have to synchronize on the lower level.
I offer this patch as necessary, but probably not sufficient, for
implementing full scsi "mq" support. The scsi/block mq interface
itself does not seem to be documented and is possibly in a state
of flux.
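
For what it is worth, one midlayer hook is the nr_hw_queues field in
struct Scsi_Host (used when scsi_mod.use_blk_mq=Y); how submit_queues
might be plumbed through is roughly this, though treat the details as
my assumption rather than documented behaviour:

#include <scsi/scsi_host.h>

static int submit_queues = 1;   /* illustrative module parameter */

/* in the driver's host-add path, after scsi_host_alloc() has returned
 * hpnt and before scsi_add_host(); with blk-mq enabled the block layer
 * then creates one hardware context per submit queue */
static void sdebug_set_hw_queues(struct Scsi_Host *hpnt)
{
        hpnt->nr_hw_queues = submit_queues;
}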
From testing this patch (e.g. by observing
"cat /proc/scsi/scsi_debug/<host_num>" while fio is running), the
CPU affinity is very good without any per-cpu magic ***. The
"misqueues" count (that is, (cpu) miscues on queues), which records
the number of times a timer or work queue callback runs on a
different cpu than the one the command was queued on, is extremely
low, typically zero. I could get non-zero numbers if I ran something
else (e.g. a kernel build) while fio was running, but even then the
misqueues stayed well under 1% of the commands queued.
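
To be concrete about what gets counted, the check behind the
"misqueues" number is roughly the following (field and counter names
are illustrative):

#include <linux/hrtimer.h>
#include <linux/smp.h>
#include <linux/atomic.h>

/* illustrative: per-command state kept while a response is deferred */
struct sdebug_defer {
        struct hrtimer hrt;
        int issuing_cpu;        /* cpu the command was queued on */
};

static atomic_t sdebug_miss_cpus = ATOMIC_INIT(0);      /* "misqueues" */

static enum hrtimer_restart sdebug_q_cmd_hrt_complete(struct hrtimer *timer)
{
        struct sdebug_defer *sd_dp =
                container_of(timer, struct sdebug_defer, hrt);

        /* the callback landed on a different cpu than the one that
         * queued the command: count it as a misqueue */
        if (raw_smp_processor_id() != sd_dp->issuing_cpu)
                atomic_inc(&sdebug_miss_cpus);

        /* ... locate the queued command and complete it ... */
        return HRTIMER_NORESTART;
}

The work item path does the same comparison in its callback.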
That said, I see very little performance improvement with
submit_queues=4 (the number of processors on my two test machines)
compared to submit_queues=1, which is effectively what the driver was
doing before this patch. So I'm open to suggestions, especially in
the form of code :-)
Also, if we went for per-cpu structures, should we worry about the
complex issue of a cpu being hot unplugged (or plugged back in)
while its queue is holding unfinished commands?
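
A sketch of what handling that might look like with the cpu notifier
interface (sdebug_drain_queue() is hypothetical, and whether draining
on CPU_DOWN_PREPARE is sufficient is exactly the complexity I am
worried about):

#include <linux/cpu.h>
#include <linux/notifier.h>

/* hypothetical: complete or requeue whatever @cpu's queue still holds */
static void sdebug_drain_queue(unsigned int cpu)
{
        /* walk the queue's in_use bitmap, cancel any pending hrtimers
         * or work items, and finish the commands elsewhere */
}

static int sdebug_cpu_callback(struct notifier_block *nfb,
                               unsigned long action, void *hcpu)
{
        unsigned int cpu = (unsigned long)hcpu;

        switch (action & ~CPU_TASKS_FROZEN) {
        case CPU_DOWN_PREPARE:
                /* stop using this cpu's queue and empty it before the
                 * cpu goes away */
                sdebug_drain_queue(cpu);
                break;
        default:
                break;
        }
        return NOTIFY_OK;
}

static struct notifier_block sdebug_cpu_notifier = {
        .notifier_call = sdebug_cpu_callback,
};

/* registered at init time with
 * register_hotcpu_notifier(&sdebug_cpu_notifier); */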
Doug Gilbert
*** You are probably correct: without the per-cpu "magic", lock
contention is likely the reason there is so little performance
improvement in the multiple submit queue case.
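
If someone wants to experiment, the per-cpu variant would presumably
look something like this (a sketch only, re-using the illustrative
struct sdebug_queue from earlier in this mail):

#include <linux/percpu.h>
#include <linux/smp.h>

static struct sdebug_queue __percpu *sdebug_q;

static int sdebug_alloc_queues(void)
{
        int cpu;

        sdebug_q = alloc_percpu(struct sdebug_queue);
        if (!sdebug_q)
                return -ENOMEM;

        for_each_possible_cpu(cpu)
                spin_lock_init(&per_cpu_ptr(sdebug_q, cpu)->qc_lock);
        return 0;
}

/* in queuecommand: use the local cpu's queue so its lock is almost
 * never contended across cpus */
static struct sdebug_queue *sdebug_get_queue(void)
{
        /* per_cpu_ptr() + raw_smp_processor_id() rather than
         * this_cpu_ptr() so preemption does not matter here; even if
         * we migrate, the per-queue lock keeps it correct */
        return per_cpu_ptr(sdebug_q, raw_smp_processor_id());
}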
Also, while testing with fio and submit_queues=<num_of_cpus>,
'top -H' shows each fio thread at around 100%.