Re: Which I/O scheduler is Fedora switching to in 4.21? mq-deadline or BFQ?

> On 13 Dec 2018, at 18:34, stan <stanl-fedorauser@xxxxxxxxxxx> wrote:
> 
> On Thu, 13 Dec 2018 17:46:30 +0100
> Paolo Valente <paolo.valente@xxxxxxxxxx> wrote:
> 
>>> On 13 Dec 2018, at 17:41, stan
>>> <stanl-fedorauser@xxxxxxxxxxx> wrote:
>>> 
> 
>> You don't have bfq for a comparison, but you can still get an idea
>> of how responsive your system is by comparing these start-up times
>> with how long the same application takes to start when there is no
>> background I/O.  Just do
>> 
>> sudo ./comm_startup_lat.sh <scheduler-you-want-to-test> 0 0 seq 3
>> "replay-startup-io gnometerm"
>> 
> cfq with the above command (without I/O):  *BIG* difference.
> 

Great! (for bfq :) )

> Latency statistics:
>         min         max         avg     std_dev     conf99%
>        1.34       1.704     1.53367    0.183118     3.66336
> Aggregated throughput:
>         min         max         avg     std_dev     conf99%
>           0        8.03     5.23143     2.60745     15.4099
> Read throughput:
>         min         max         avg     std_dev     conf99%
>           0        8.03     5.22571     2.60522     15.3967
> Write throughput:
>         min         max         avg     std_dev     conf99%
>           0        0.02  0.00571429  0.00786796   0.0464991
> 
>> and get ready to be surprised (next surprise when/if you try
>> bfq ...)
> 
> I had a response saying that bfq isn't available for single-queue
> devices, but there might be a workaround.  So it might or might not
> happen, depending on whether I can get it working.
> 

Actually, there's still a little confusion on this point.  First,
blk-mq *is not* only for multi-queue devices: blk-mq can drive any
kind of block device.  If you have a fast, single-queue SSD, then
blk-mq is likely to make it go faster.  If you have a multi-queue
drive, which implies that your drive is very fast by current
standards, then blk-mq is certainly the only way to exploit a large
fraction of the maximum speed of your multi-queue monster.
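
By the way, a quick way to check whether a given drive is already
handled by blk-mq (sda below is just an example device name) is to
look for the mq directory that blk-mq exposes in sysfs:

  ls /sys/block/sda/mq/

If that directory exists, the drive is under blk-mq.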

To use blk-mq, i.e., to have blk-mq handle your storage, you only
need to tell the I/O stack that you want blk-mq to manage the I/O
for the driver of your storage.  In this respect, SCSI is by far the
most widely used generic storage driver.  So, following the
instructions already provided by others, you can have blk-mq handle
a SCSI-attached device by, e.g., adding "scsi_mod.use_blk_mq=y" as a
kernel boot option.  This choice is not constrained, in any respect,
by the nature of your drive, be it an SD card, eMMC, HDD, SSD or
whatever else.  As for native multi-queue devices, they are handled
by the NVMe driver, for which only blk-mq is available.
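
For example, on Fedora one way to set that option persistently
(assuming the grubby tool, which Fedora installs by default) is:

  sudo grubby --update-kernel=ALL --args="scsi_mod.use_blk_mq=y"

then reboot.  Editing GRUB_CMDLINE_LINUX in /etc/default/grub and
regenerating grub.cfg works as well.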

Once you have switched to blk-mq for your drive, you will have the
set of I/O schedulers that live in blk-mq, and bfq is among them.
Actually, there is also an out-of-tree bfq available for the good
old legacy block layer, but that is another story.
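
For instance, once the drive is under blk-mq, checking and switching
the scheduler is just a matter of sysfs (sda is only an example
device; this assumes bfq is built into, or available as a module
for, your kernel):

  cat /sys/block/sda/queue/scheduler    # active one shown in brackets
  sudo modprobe bfq                     # load bfq if it is a module
  echo bfq | sudo tee /sys/block/sda/queue/scheduler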

Finally, from 4.21 on there will be no legacy block layer any
longer.  Only blk-mq will be available, and hence only blk-mq I/O
schedulers.

Thanks for trying my tests,
Paolo


>>> cfq
>>> 
>>> Latency statistics:
>>>        min         max         avg     std_dev     conf99%
>>>     22.142      27.157     24.1967      2.6273     52.5604
>>> Aggregated throughput:
>>>        min         max         avg     std_dev     conf99%
>>>      67.29      139.74     105.491      19.245     39.7628
>>> Read throughput:
>>>        min         max         avg     std_dev     conf99%
>>>      51.73      135.67     102.402     21.3985     44.2123
>>> Write throughput:
>>>        min         max         avg     std_dev     conf99%
>>>       0.01       46.29     3.08857     8.37179     17.2972
>>> 
>>> noop
>>> 
>>> Latency statistics:
>>>        min         max         avg     std_dev     conf99%
>>>     40.861      42.021     41.3637    0.595266     11.9086
>>> Aggregated throughput:
>>>        min         max         avg     std_dev     conf99%
>>>      45.66       72.89     55.9847     5.99054     9.87365
>>> Read throughput:
>>>        min         max         avg     std_dev     conf99%
>>>      41.69       70.85     51.9495     6.02467      9.9299
>>> Write throughput:
>>>        min         max         avg     std_dev     conf99%
>>>          0         7.9     4.03527     1.62392     2.67656
_______________________________________________
devel mailing list -- devel@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to devel-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://getfedora.org/code-of-conduct.html
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/devel@xxxxxxxxxxxxxxxxxxxxxxx



