Re: [PATCH v2] block: BFQ default for single queue devices

> On 16 Oct 2018, at 18:14, Federico Motta <federico@xxxxxxxxx> wrote:
> 
> On 10/15/18 5:02 PM, Bart Van Assche wrote:
>> On Mon, 2018-10-15 at 16:10 +0200, Linus Walleij wrote:
>>> + * For blk-mq devices, we default to using:
>>> + * - "none" for multiqueue devices (nr_hw_queues != 1)
>>> + * - "bfq", if available, for single queue devices
>>> + * - "mq-deadline" if "bfq" is not available for single queue devices
>>> + * - "none" for single queue devices as well as last resort
>> 
>> For SATA SSDs nr_hw_queues == 1, so this patch will also affect those
>> SSDs. Since this patch is an attempt to improve performance, I'd like
>> to see measurement data for one or more recent SATA SSDs before a
>> decision is taken about what to do with this patch.
>> 
>> Thanks,
>> 
>> Bart.
>> 
> 
> Hi,
> although these tests should be run on single-queue devices, I tried
> running them on a high-performance NVMe device. IMHO, if the results
> are good on such a "difficult to deal with" multi-queue device, they
> should be good enough on a "simpler" single-queue storage device too.
> 
> Testbed specs:
> kernel = 4.18.0 (from the bfq dev branch [1], where bfq already
>                  contains the commits that will be available in 4.20)
> fs     = ext4
> drive  = Samsung 960 PRO NVMe M.2 512 GB SSD
> 
> The device data sheet states that, under random I/O:
> * QD 1, 1 thread:
>   * read  = 14 kIOPS
>   * write = 50 kIOPS
> * QD 32, 4 threads:
>   * read = write = 330 kIOPS
> 
> What follows is a summary of the results; on request I can provide
> them all. The workload notation (e.g. 5r5w-seq) is read as follows
> (see the illustrative job interleaved below):
> - num_readers                  (5r)
> - num_writers                  (5w)
> - sequential_io or random_io   (-seq)
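
To make the notation concrete, a 5r5w-seq run could look like the
hypothetical fio job below. The parameters (block size, file size,
mount point /mnt/test) are illustrative assumptions; the actual
benchmark harness and settings behind the numbers that follow may
differ.

  [global]
  ioengine=libaio
  direct=1
  bs=64k
  ; ext4 mount point, matching the testbed above (hypothetical path)
  directory=/mnt/test
  size=1G
  runtime=30
  time_based
  group_reporting

  ; "5r": five sequential readers
  [readers]
  rw=read
  numjobs=5

  ; "5w": five sequential writers
  [writers]
  rw=write
  numjobs=5

For the -rand variants, rw=randread and rw=randwrite would be used
instead.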
> 
> 
> # replayed gnome-terminal startup time (lower is better)
> workload   bfq-mq [s]  none [s]  % gain
> --------   ----------  --------  ------
> 10r-seq      0.3725      2.79     86.65
> 5r5w-seq     0.9725      5.53     82.41
> 
> # throughput (higher is better)
> workload   bfq-mq [MB/s]  none [MB/s]   % gain
> ---------  -------------  -----------  -------
> 10r-rand        394.806      429.735    -8.128
> 10r-seq        1387.63      1431.81     -3.086
> 1r-seq          838.13       798.872     4.914
> 5r5w-rand      1118.12      1297.46    -13.822
> 5r5w-seq       1187         1313.8      -9.651
> 

A little unexpectedly for me, the throughput loss for random I/O is
even lower than what I obtained with my nasty SATA SSD (and reported
in my public results).

I didn't expect such a small loss with sequential parallel reads
either. Probably, when going multi-queue, there are changes I haven't
even considered (I have never tested bfq on a multi-queue device).

Thanks,
Paolo

> Thanks,
> Federico
> 
> [1] https://github.com/Algodev-github/bfq-mq/commits/bfq-mq
