>>> Adam Hutchinson <ajhutchin@xxxxxxxxx> wrote on 15.06.2022 at 20:57 in message
<CAFU8FUgwMX_d85OG+qC+qTX-NpFiSVkwBtradzAmeJW-3PCmEQ@xxxxxxxxxxxxxx>:
> Is there any reason not to use time as an indicator that pending R2Ts
> need to be processed? Could R2Ts be tagged with a timestamp when
> received and only given priority over new commands if the age of the
> R2T at the head exceeds some configurable limit? This would guarantee
> that R2Ts will eventually be serviced even if the block layer doesn't
> reduce the submission rate of new commands, it wouldn't remove the
> performance benefits of the current algorithm, which gives priority to
> new commands, and it would be a relatively simple solution. A
> threshold of 0 could indicate that R2Ts should always be given
> priority over new commands. Just a thought.

I had a similar thought when comparing SCSI command scheduling with
process scheduling: real-time scheduling can cause starvation when
newer requests are postponed indefinitely, while the classic scheduler
increases the chance that longer-waiting tasks are scheduled next. In
any case, that would require some sorting of the queue (or searching
for a maximum/minimum among the requests, which is equivalent).

Regards,
Ulrich

>
> Regards,
> Adam
>
> On Wed, Jun 15, 2022 at 11:37 AM Mike Christie
> <michael.christie@xxxxxxxxxx> wrote:
>>
>> On 6/7/22 8:19 AM, Dmitry Bogdanov wrote:
>> > In the iscsi_data_xmit function (the TX worker) there is a walk
>> > through the queue of new SCSI commands, which is replenished in
>> > parallel. Only after that queue is emptied does the function start
>> > sending pending DataOut PDUs. That leads to a DataOut timeout on
>> > the target side and to connection reinstatement.
>> >
>> > This patch swaps the walk through the new command queue and the
>> > pending DataOut queue, to give preference to ongoing commands over
>> > new ones.
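[Editor's note: to make the reordering concrete, here is a minimal user-space sketch of the idea. The names tx_worker, r2t_queue, and cmd_queue are illustrative only; the actual loop lives in the kernel's iscsi_data_xmit and is not reproduced here.]

```c
#include <assert.h>
#include <stddef.h>

/* Minimal user-space model of the TX worker's two queues; the names
 * and structure are illustrative, not the kernel's actual code. */
enum { QCAP = 8 };

struct queue {
    int items[QCAP];
    size_t head, tail;
};

static int q_pop(struct queue *q, int *out)
{
    if (q->head == q->tail)
        return 0;
    *out = q->items[q->head++ % QCAP];
    return 1;
}

static void q_push(struct queue *q, int v)
{
    q->items[q->tail++ % QCAP] = v;
}

/* After the patch: drain pending DataOut (R2T) work first, then take
 * up new commands, so in-flight commands cannot be starved by a
 * steady stream of new submissions replenishing the command queue. */
static void tx_worker(struct queue *r2t_queue, struct queue *cmd_queue,
                      int *serviced, size_t *n)
{
    int v;

    while (q_pop(r2t_queue, &v))   /* ongoing commands first */
        serviced[(*n)++] = v;
    while (q_pop(cmd_queue, &v))   /* then new commands */
        serviced[(*n)++] = v;
}
```

Before the patch the two while loops were in the opposite order, which is exactly what let a continuously refilled command queue postpone DataOut PDUs past the target's timeout.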
>> >
>> > Reviewed-by: Konstantin Shelekhin <k.shelekhin@xxxxxxxxx>
>> > Signed-off-by: Dmitry Bogdanov <d.bogdanov@xxxxxxxxx>
>>
>> Let's do this patch. I've tried so many combos of implementations and
>> they all have different perf gains or losses with different workloads.
>> I've already been going back and forth with myself for over a year
>> (the link for my patch in the other mail was version N) and I don't
>> think a common solution is going to happen.
>>
>> Your patch fixes the bug, and I've found a workaround for my issue
>> where I tweak the queue depth, so I think we will be ok.
>>
>> Reviewed-by: Mike Christie <michael.christie@xxxxxxxxxx>
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "open-iscsi" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to open-iscsi+unsubscribe@xxxxxxxxxxxxxxxx.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/open-iscsi/237bed01-819a-55be-5163-274fac3b61e6%40oracle.com.
>
>
> --
> "Things turn out best for the people who make the best out of the way
> things turn out." - Art Linkletter
>
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/open-iscsi/CAFU8FUgwMX_d85OG%2BqC%2BqTX-NpFiSVkwBtradzAmeJW-3PCmEQ%40mail.gmail.com.
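[Editor's note: Adam's aging proposal at the top of the thread reduces to a small predicate. The sketch below assumes a hypothetical r2t_due() helper and millisecond timestamps; nothing here exists in the driver.]

```c
/* Hypothetical helper: should the R2T at the head of the pending
 * queue be serviced before any new command? A threshold of 0 means
 * R2Ts always win, matching the "threshold of 0" case in the
 * proposal; otherwise an R2T wins only once it has aged past the
 * configurable limit. */
static int r2t_due(long now_ms, long r2t_stamp_ms, long threshold_ms)
{
    if (threshold_ms == 0)
        return 1;                                    /* always prefer R2Ts */
    return (now_ms - r2t_stamp_ms) >= threshold_ms;  /* aged past limit */
}
```

The TX worker would call such a predicate at the top of each iteration: if it returns true, service the pending DataOut queue first; otherwise keep draining new commands, preserving the old algorithm's throughput bias while bounding R2T latency.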