RE: [PATCH] scsi: iscsi: prefer xmit of DataOut before new cmd

Hi Mike,

>On 6/7/22 10:55 AM, Mike Christie wrote:
>> On 6/7/22 8:19 AM, Dmitry Bogdanov wrote:
>>> In function iscsi_data_xmit (TX worker) there is a walk through the
>>> queue of new SCSI commands, which is replenished in parallel. Only
>>> after that queue is emptied does the function start sending pending
>>> DataOut PDUs. That leads to the DataOut timer firing on the target side
>>> and to connection reinstatement.
>>>
>>> This patch swaps the order of walking the new-commands queue and the
>>> pending DataOut queue, to give preference to ongoing commands over new ones.
>>>
>>
>> ...
>>
>>>              task = list_entry(conn->cmdqueue.next, struct iscsi_task,
>>> @@ -1594,28 +1616,10 @@ static int iscsi_data_xmit(struct iscsi_conn *conn)
>>>               */
>>>              if (!list_empty(&conn->mgmtqueue))
>>>                      goto check_mgmt;
>>> +            if (!list_empty(&conn->requeue))
>>> +                    goto check_requeue;
>>
>>
>>
>> Hey, I've been posting a similar patch:
>>
>> https://www.spinics.net/lists/linux-scsi/msg156939.html
>>
>> A problem I hit is a possible perf regression so I tried to allow
>> us to start up a burst of cmds in parallel. It's pretty simple where
>> we allow up to a queue's worth of cmds to start. It doesn't try to
>> check that all cmds are from the same queue or anything fancy to try
>> and keep the code simple. Mostly just assuming users might try to bunch
>> cmds together during submission or they might hit the queue plugging
>> code.
>>
>> What do you think?
>
>Oh yeah, what about a modparam batch_limit? It's between 0 and cmd_per_lun.
>0 would check after every transmission like above.

 Did you really hit a perf regression? I cannot imagine how that would be
possible.
A DataOut PDU carries data too, so throughput cannot be reduced by sending
DataOut PDUs first.

 The only concern is latency, and that is not an easy question.
IMHO, a system should strive to reduce the maximum latency while barely
impacting the minimum latency (prefer current commands), rather than reduce
the minimum latency to the detriment of the maximum (prefer new commands).

 Any preference of new commands over current ones looks like I/O scheduler
functionality, but at an underlying layer, so to say a bus layer.
I think that is a matter for future investigation/development.

BR,
 Dmitry



