Re: [PATCH 0/1] improve brd performance with blk-mq

On 2/22/23 3:42 PM, Luis Chamberlain wrote:
> On Tue, Feb 21, 2023 at 04:51:21PM -0700, Jens Axboe wrote:
>> On 2/21/23 2:59 PM, Luis Chamberlain wrote:
>>> On Fri, Feb 17, 2023 at 08:22:15PM +0530, Pankaj Raghav wrote:
>>>> I will park this effort as blk-mq doesn't improve the performance for brd,
>>>> and we can retain the submit_bio interface.
>>>
>>> I'm not sure the feedback was meant to suggest we shouldn't do the
>>> blk-mq conversion, but rather to explain why, on some workloads, it
>>> may not be as good as the old submit_bio() interface. Probably low-
>>> hanging fruit, if we *really* wanted to provide parity on the odd
>>> workloads.
>>>
>>> If we *mostly* see better performance with blk-mq, it would seem
>>> reasonable to merge. Dozens of drivers were converted to blk-mq,
>>> *most* without *any* performance justification. I think Ming's was
>>> the commit log with the most elaborate performance metrics, and I
>>> think it also showed some *minor* slowdown on some workloads, but
>>> the dramatic gains made it worthwhile.
>>>
>>> Most of the conversions to blk-mq didn't even have *any* metrics posted.
>>
>> You're comparing apples and oranges. I don't want to get into (fairly)
>> ancient history at this point, but the original implementation was honed
>> with the nvme conversion - which is the most performant driver/hardware
>> we have available.
>>
>> Converting something that doesn't need a scheduler, doesn't need
>> timeouts, doesn't benefit from merging, doesn't need tagging etc doesn't
>> make a lot of sense. If you need none of that, *of course* you're going
>> to see a slowdown from doing all of these extra things by default.
>> That's pretty obvious.
>>
>> This isn't about workloads at all.
> 
> I'm not arguing that the mq design is over-architected for simple
> devices. It's a given that features a device doesn't need can cost a
> minor delta in performance. I'm asking whether, despite a few workloads
> seeing a *minor delta* loss from the mq conversion of brd, the huge
> gains possible on some *other* workloads suffice for it to be
> converted over.
> 
> We're talking about + ~125% performance boost benefit for randreads.

Please actually read the whole thread. The boost there was due to brd
not supporting nowait, which has since been corrected. The latest
numbers reflect that and show the expected outcome (bio > blk-mq for
brd, io_uring > aio for both).
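For context on the nowait point: bio-based drivers don't get REQ_NOWAIT
handling for free; a driver has to declare that its submit path won't
block. A minimal sketch of the opt-in (the exact upstream brd patch may
differ; this only illustrates the queue flag):

```c
/* Sketch, not the literal patch: during device setup (e.g. in
 * brd_alloc(), after the gendisk is created), mark the queue as
 * nowait-capable so RWF_NOWAIT / io_uring submissions aren't
 * punted or failed unnecessarily: */
blk_queue_flag_set(QUEUE_FLAG_NOWAIT, disk->queue);
```

Without that flag, nowait I/O against brd takes a slower fallback path,
which is what inflated the earlier blk-mq-vs-bio comparison.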

-- 
Jens Axboe



