Re: CFQ idling kills I/O performance on ext4 with blkio cgroup controller

On 5/30/19 3:45 AM, Paolo Valente wrote:
> 
> 
>> On 30 May 2019, at 10:29, Srivatsa S. Bhat <srivatsa@xxxxxxxxxxxxx> wrote:
>>
[...]
>>
>> Your fix held up well under my testing :)
>>
> 
> Great!
> 
>> As for throughput, with low_latency = 1, I get around 1.4 MB/s with
>> bfq (vs 1.6 MB/s with mq-deadline). This is a huge improvement
>> compared to what it was before (70 KB/s).
>>
> 
> That's beautiful news!
> 
> So, now we have the best of both worlds: maximum throughput and
> full control over I/O (including minimum latency for interactive and
> soft real-time applications).  Besides, no manual configuration is
> needed.  Of course, this holds unless/until you find other flaws ... ;)
> 

Indeed, that's awesome! :)
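
For anyone who wants to reproduce the low_latency = 1 configuration
discussed above, here is a minimal sketch of how the scheduler and the
tunable are selected through sysfs (the paths are the standard blk-mq
ones; the device name sda is only an example, so adjust it for your
system):

    #!/usr/bin/env python3
    # Sketch: select the bfq I/O scheduler for a block device and
    # enable its low_latency tunable via sysfs.  Assumes the device
    # is "sda" and that the script runs as root.
    from pathlib import Path

    dev = "sda"  # example device name; change as needed
    queue = Path(f"/sys/block/{dev}/queue")

    # Writing a scheduler name to queue/scheduler selects it; reading
    # the file back shows the active scheduler in brackets, e.g.
    # "mq-deadline kyber [bfq] none".
    (queue / "scheduler").write_text("bfq")
    print((queue / "scheduler").read_text().strip())

    # Once bfq is active, its tunables appear under queue/iosched/.
    (queue / "iosched" / "low_latency").write_text("1")

The same toggling can of course be done with a plain echo from a
shell; the point is just where the knobs live.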

>> With tracing on, the throughput is a bit lower (as expected I guess),
>> about 1 MB/s, and the corresponding trace file
>> (trace-waker-detection-1MBps) is available at:
>>
>> https://www.dropbox.com/s/3roycp1zwk372zo/bfq-traces.tar.gz?dl=0
>>
> 
> Thank you for the new trace.  I've analyzed it carefully, and, as I
> suspected, this residual 12% throughput loss is due to a couple of
> heuristics that occasionally get something wrong.  Most likely, ~12%
> is the worst-case loss, and if one repeats the tests, the loss may be
> much lower in some runs.
>

Ah, I see.
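
(For the record, that figure matches the numbers above:
(1.6 - 1.4) / 1.6 = 12.5%, i.e., roughly the 12% loss in question.)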
 
> I think it is very hard to eliminate this fluctuation while keeping
> full I/O control.  But, who knows, I might have some lucky idea in the
> future.
> 

:)

> At any rate, since you pointed out that you are interested in
> out-of-the-box performance, let me complete the picture: if
> low_latency is left set, then, in return for this 12% loss, one gets
> a) at least 1000% higher responsiveness, e.g., start-up times of
> applications under load reduced by a factor of ~10 [1];
> b) 500-1000% higher throughput in multi-client server workloads, as I
> already pointed out [2].
> 

I'm very happy that you could solve the problem without having to
compromise on any of the performance characteristics/features of BFQ!


> I'm going to prepare complete patches.  In addition, if that's OK
> with you, I'll report these results on the bug report you created.
> Then I guess we can close it.
> 

Sounds great!

> [1] https://algo.ing.unimo.it/people/paolo/disk_sched/results.php
> [2] https://www.linaro.org/blog/io-bandwidth-management-for-production-quality-services/
> 
>> Thank you so much for your tireless efforts in fixing this issue!
>>
> 
> I did enjoy working on this with you: your test case and your support
> enabled me to make important improvements.  So, thank you very much
> for your collaboration so far,
> Paolo

My pleasure! :)
 
Regards,
Srivatsa
VMware Photon OS


