Re: [PATCH 4/4] dm: support I/O polling

On 3/3/21 6:09 PM, Mikulas Patocka wrote:
> 
> 
> On Wed, 3 Mar 2021, JeffleXu wrote:
> 
>>
>>
>> On 3/3/21 3:05 AM, Mikulas Patocka wrote:
>>
>>> Support I/O polling if submit_bio_noacct_mq_direct returned non-empty
>>> cookie.
>>>
>>> Signed-off-by: Mikulas Patocka <mpatocka@xxxxxxxxxx>
>>>
>>> ---
>>>  drivers/md/dm.c |    5 +++++
>>>  1 file changed, 5 insertions(+)
>>>
>>> Index: linux-2.6/drivers/md/dm.c
>>> ===================================================================
>>> --- linux-2.6.orig/drivers/md/dm.c	2021-03-02 19:26:34.000000000 +0100
>>> +++ linux-2.6/drivers/md/dm.c	2021-03-02 19:26:34.000000000 +0100
>>> @@ -1682,6 +1682,11 @@ static void __split_and_process_bio(stru
>>>  		}
>>>  	}
>>>  
>>> +	if (ci.poll_cookie != BLK_QC_T_NONE) {
>>> +		while (atomic_read(&ci.io->io_count) > 1 &&
>>> +		       blk_poll(ci.poll_queue, ci.poll_cookie, true)) ;
>>> +	}
>>> +
>>>  	/* drop the extra reference count */
>>>  	dec_pending(ci.io, errno_to_blk_status(error));
>>>  }
>>
>> It seems that the general idea of your design is to
>> 1) submit *one* split bio
>> 2) blk_poll(), waiting for the previously submitted split bio to complete
> 
> No, I submit all the bios and poll for the last one.
> 
>> and then submit the next split bio, repeating the above process. I'm
>> afraid performance may be an issue here, since the batch that each
>> blk_poll() call reaps may shrink.
> 
> Could you benchmark it?
> 

I will, once I've finished some other issues.


>> Besides, the submitting routine and the polling routine are bound
>> together here, i.e., polling is always synchronous.
> 
> __split_and_process_bio calls __split_and_process_non_flush in a loop

I also noticed that you sent this patch.
https://patchwork.kernel.org/project/dm-devel/patch/alpine.LRH.2.02.2103010457510.631@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/

I agree with you that the while() loop there is unnecessary, and thus
there's no loop calling __split_and_process_non_flush() in
__split_and_process_bio().
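For reference, my understanding of the non-flush path in mainline dm.c
(around v5.11) is roughly the following; this is a simplified sketch from
memory, not a verbatim copy, with error handling and the zone-management
case omitted. The remainder is split off and resubmitted instead of being
looped over in place:

```c
/*
 * Simplified sketch of the non-flush path in __split_and_process_bio()
 * (drivers/md/dm.c, ~v5.11); not verbatim, error paths omitted.
 */
ci.bio = bio;
ci.sector_count = bio_sectors(bio);
error = __split_and_process_non_flush(&ci);
if (ci.sector_count && !error) {
	/*
	 * The remainder is not processed here; it is split off, chained
	 * to the original bio and resubmitted, so it goes back through
	 * __submit_bio_noacct() and is picked up by the loop there.
	 */
	struct bio *b = bio_split(bio, bio_sectors(bio) - ci.sector_count,
				  GFP_NOIO, &md->queue->bio_split);
	ci.io->orig_bio = b;
	bio_chain(b, bio);
	ret = submit_bio_noacct(bio);
}
```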


> __split_and_process_non_flush records the poll cookie in ci.poll_cookie. 
> When we processed all the bios, we poll for the last cookie here:
> 
>         if (ci.poll_cookie != BLK_QC_T_NONE) {
>                 while (atomic_read(&ci.io->io_count) > 1 &&
>                        blk_poll(ci.poll_queue, ci.poll_cookie, true)) ;
>         }

So what will happen if one bio submitted to the dm device crosses the
device boundary and spans several target devices (e.g., dm-stripe)?
Please refer to the following call graph.

```
submit_bio
  __submit_bio_noacct
    disk->fops->submit_bio(), calling into __split_and_process_bio(), which
        calls __split_and_process_non_flush() once, submitting the *first* split bio
    disk->fops->submit_bio(), calling into __split_and_process_bio(), which
        calls __split_and_process_non_flush() once, submitting the *second* split bio
    ...
```
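
The loop that drives those repeated ->submit_bio() calls lives in
__submit_bio_noacct() (block/blk-core.c). A heavily simplified sketch,
reconstructed from memory (the real code keeps two on-stack lists and
sorts newly generated bios by device level), looks roughly like this:

```c
/*
 * Heavily simplified sketch of __submit_bio_noacct() (block/blk-core.c);
 * the real code keeps two on-stack lists and sorts new bios by level.
 */
static blk_qc_t __submit_bio_noacct(struct bio *bio)
{
	struct bio_list bio_list_on_stack;
	blk_qc_t ret = BLK_QC_T_NONE;

	bio_list_init(&bio_list_on_stack);
	current->bio_list = &bio_list_on_stack;

	do {
		/*
		 * Each call (which ends up in disk->fops->submit_bio(),
		 * i.e. dm's __split_and_process_bio()) may queue more
		 * split bios on current->bio_list; they are popped and
		 * submitted here, one by one, only after the previous
		 * call has returned.
		 */
		ret = __submit_bio(bio);
	} while ((bio = bio_list_pop(&bio_list_on_stack)));

	current->bio_list = NULL;
	return ret;
}
```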


So the loop is in __submit_bio_noacct(), rather than in
__split_and_process_bio(). Your design will submit the first split bio
and then poll on it, then submit the next split bio and poll on that
one, and so on...
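
In other words, for a bio that dm-stripe splits into N per-target bios,
the effective ordering would look something like the following
pseudocode; submit_split_bio() and split_bio_done() are made-up names
purely to illustrate the sequencing:

```c
/*
 * Pseudocode only -- submit_split_bio() and split_bio_done() are
 * made-up helpers to illustrate the ordering, not kernel functions.
 */
for (i = 0; i < N; i++) {
	cookie = submit_split_bio(i);		/* disk->fops->submit_bio() */
	while (!split_bio_done(i))
		blk_poll(q, cookie, true);	/* polls only this split bio */
}
```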

-- 
Thanks,
Jeffle

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://listman.redhat.com/mailman/listinfo/dm-devel



