RE: [PATCH] scsi: iscsi: prefer xmit of DataOut before new cmd

Hi Mike,

>>On 6/8/22 9:16 AM, Dmitriy Bogdanov wrote:
>>> On 6/7/22 10:55 AM, Mike Christie wrote:
>>>> On 6/7/22 8:19 AM, Dmitry Bogdanov wrote:
>>>>> In function iscsi_data_xmit (TX worker) there is a walk through the
>>>>> queue of new SCSI commands, which is replenished in parallel. Only
>>>>> after that queue is emptied does the function start sending pending
>>>>> DataOut PDUs. That leads to the DataOut timer timing out on the
>>>>> target side and to connection reinstatement.
>>>>>
>>>>> This patch swaps the order of walking the new-command queue and the
>>>>> pending DataOut queue, to give preference to ongoing commands over
>>>>> new ones.
>>>>>
>>>>
>>>> ...
>>>>
>>>>>              task = list_entry(conn->cmdqueue.next, struct iscsi_task,
>>>>> @@ -1594,28 +1616,10 @@ static int iscsi_data_xmit(struct iscsi_conn *conn)
>>>>>               */
>>>>>              if (!list_empty(&conn->mgmtqueue))
>>>>>                      goto check_mgmt;
>>>>> +            if (!list_empty(&conn->requeue))
>>>>> +                    goto check_requeue;
>>>>
>>>>
>>>>
>>>> Hey, I've been posting a similar patch:
>>>>
>>>> https://www.spinics.net/lists/linux-scsi/msg156939.html
>>>>
>>>> A problem I hit is a possible perf regression so I tried to allow
>>>> us to start up a burst of cmds in parallel. It's pretty simple where
>>>> we allow up to a queue's worth of cmds to start. It doesn't try to
>>>> check that all cmds are from the same queue or anything fancy to try
>>>> and keep the code simple. Mostly just assuming users might try to bunch
>>>> cmds together during submission or they might hit the queue plugging
>>>> code.
>>>>
>>>> What do you think?
>>>
>>> Oh yeah, what about a modparam batch_limit? It's between 0 and cmd_per_lun.
>>> 0 would check after every transmission like above.
>>
>>  Did you really face a perf regression? I cannot imagine how that would
>> be possible.
>> A DataOut PDU carries data too, so throughput cannot be decreased by
>> sending DataOut PDUs.
>
>
>We can agree that queue plugging and batching improves throughput right?
>The app or block layer may try to batch commands. It could be with something
>like fio's batch args or you hit the block layer queue plugging.

I agree that those features definitely improve throughput on local devices
with serial interfaces like SATA1. Since SATA2 (Native Command Queuing),
devices can reorder incoming commands to provide the best throughput, and
SCSI, I guess, has had a similar feature from the very beginning.
But on remote devices (iSCSI and FC) it is not guaranteed - the initiator's
command order may be changed by the nature of the network protocol itself:
single-PDU commands vs R2T+DataOut exchanges, PDU retransmission due to CRC
errors, and so on.

>With the current code we can end up sending all cmds to the target in a way
>the target can send them to the real device batched. For example, we send off
>the initial N scsi command PDUs in one run of iscsi_data_xmit. The target reads
>them in, and sends off N R2Ts. We are able to read N R2Ts in the same call.
>And again we are able to send the needed data for them in one call of
>iscsi_data_xmit. The target is able to read in the data and send off the
>WRITEs to the physical device in a batch.
>
>With your patch, we can end up not batching them like the app/block layer
>intended. For example, we now call iscsi_data_xmit and, in the cmdqueue loop,
>we've sent N - M scsi cmd PDUs, then see that we've got an incoming R2T to
>handle. So we goto check_requeue. We send the needed data. The target then
>starts to send the cmd to the physical device. If we have read in multiple
>R2Ts then we will continue the requeue loop. And so we might be able to send
>the data fast enough that the target can then send those commands to the
>physical device. But we've now broken up the batching the upper layers sent
>to us and we were doing before.

In my head everything is vice versa :)
The current code breaks batching by sending new commands instead of completing
the transmission of the current commands, which may form their own batch.
With my patch the batches will be received (in a best-effort manner) on the
target port in the same order as the block layer sends them to the iSCSI
layer. BTW, other target ports may receive other commands in the meantime, so
the real device will see a broken batch anyway - the initiator side cannot
guarantee batching at the real device.

Let's imagine that the block layer submits two batches of big commands and
then several single small commands. The command queue will be
"ABCD EFGH" + "S" + "S" + "S" + "S"
and that what will be sent out:
current code (worst case):    A1B1C1D1 E1F1G1H1 A2 S B2 S C2D2 E2 S F2 S G2H2 (breaks batch)
current code (best case):     A1B1C1D1 E1F1G1H1 SSSS A2B2C2D2 E2F2G2H2 (reorder)
current code (bug addressed): A1B1C1D1 E1F1G1H1 SS...S (connection fail)
current code (!impossible):   A1B1C1D1 E1F1G1H1 A2B2C2D2 E2F2G2H2 SSSS (inorder)
with my patch (best case):    A1B1C1D1 E1F1G1H1 A2B2C2D2 E2F2G2H2 SSSS (inorder)
with my patch (your case):    A1B1C1D1 E1 A2 F1 B2 G1H1 C2 E2 D2 F2G2H2 SSSS (still inorder)
with my patch (worst case):   A1B1C1D1 E1F1G1H1 A2 S B2 S C2D2 E2 S F2 S G2H2 (breaks batch)

My best case (command order exactly as the block layer submits it) can never
happen with the current code.
If the "S" commands arrive in parallel, the worst cases are the same - both
versions may break a batch with new commands. But if the "S" commands are
already in the queue, my patch will most likely produce the best case instead
of the worst case.


>>  The only thing affected is latency performance. But that is not an easy question.
>
>Agree latency is important and that's why I was saying we can make it config
>option. Users can continue to try and batch their cmds and we don't break
>them. We also fix the bug in that we don't get stuck in the cmdqueue loop
>always taking in new cmds.

>> IMHO, a system should strive to reduce the maximum latency almost without
>> impacting the minimum (prefer current commands), rather than reduce the
>> minimum latency to the detriment of the maximum (prefer new commands).
>>
>>
>>  Any preference of new commands over current ones looks like an io scheduler
>
>I can see your point of view where you see it as preferring new cmds
>vs existing. It's probably due to my patch not hooking into commit_rqs
>and trying to figure out the batching exactly. It's more of a simple
>estimate.
>
>However, that's not what I'm talking about. I'm talking about the block
>layer / iosched has sent us these commands as a batch. We are now more
>likely to break that up.

No, my patch does not break batching any more than the current code does. On
the contrary, it decreases the probability of breaking the batch currently
being sent, by transmitting in the same order the commands come from the
block layer.

How to schedule "new vs old" commands is a very complicated question. It is
not just a counter. Even with a counter there are a lot of questions - should
it start from the cmdqueue loop or from iscsi_data_xmit, why the limit is
static, whether it is per LUN, and so on and so on.
That is effectively a new IO scheduler at the bus layer.

>> I think it is a matter of future investigation/development.

I am not against it at all. But it should not delay a simple bugfix patch.

BR,
 Dmitry




