Re: CFQ idling kills I/O performance on ext4 with blkio cgroup controller

[ Resending this mail with a dropbox link to the traces (instead
of a file attachment), since it didn't go through the last time. ]

On 5/21/19 10:38 AM, Paolo Valente wrote:
> 
>> So, instead of only sending me a trace, could you please:
>> 1) apply this new patch on top of the one I attached in my previous email
>> 2) repeat your test and report results
> 
> One last thing (I swear!): as you can see from my script, I have
> tested only the low_latency=0 case so far.  So please, for the
> moment, run your test with low_latency=0.  You can find the full
> path to this parameter in, e.g., my script.
> 
No problem! :) Thank you for sharing patches for me to test!
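
For anyone reproducing this, applying the patches is the usual
routine (a sketch; the patch file names below are placeholders for
the actual attachments):

cd linux
patch -p1 < bfq-add-logs-and-BUG_ONs.patch
patch -p1 < bfq-boost-injection.patch
make -j"$(nproc)" && make modules_install && make install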

I have good news :) Your patch improves the throughput significantly
when low_latency=0.
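
For reference, the low_latency knob lives under sysfs, as Paolo's
script shows; a minimal setup sketch (the device name sda is a
placeholder for the actual test disk):

echo bfq > /sys/block/sda/queue/scheduler
echo 0 > /sys/block/sda/queue/iosched/low_latency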

Without any patch:

dd if=/dev/zero of=/root/test.img bs=512 count=10000 oflag=dsync
10000+0 records in
10000+0 records out
5120000 bytes (5.1 MB, 4.9 MiB) copied, 58.0915 s, 88.1 kB/s


With both patches applied:

dd if=/dev/zero of=/root/test0.img bs=512 count=10000 oflag=dsync
10000+0 records in
10000+0 records out
5120000 bytes (5.1 MB, 4.9 MiB) copied, 3.87487 s, 1.3 MB/s

The performance is still not as good as mq-deadline (which achieves
1.6 MB/s), but this is a huge improvement for BFQ nonetheless!
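
For completeness, the mq-deadline figure comes from the same dd
invocation after switching schedulers; a sketch (sda and test1.img
are placeholders for my actual device and output file):

echo mq-deadline > /sys/block/sda/queue/scheduler
dd if=/dev/zero of=/root/test1.img bs=512 count=10000 oflag=dsync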

A tarball with the trace output from the two scenarios you requested,
one with only the debug patch applied (trace-bfq-add-logs-and-BUG_ONs)
and the other with both patches applied (trace-bfq-boost-injection),
is available here:

https://www.dropbox.com/s/pdf07vi7afido7e/bfq-traces.tar.gz?dl=0
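
Appending ?dl=1 to the Dropbox link should force a direct download,
in case you want to fetch it from the command line:

wget -O bfq-traces.tar.gz 'https://www.dropbox.com/s/pdf07vi7afido7e/bfq-traces.tar.gz?dl=1'
tar xzf bfq-traces.tar.gz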

Thank you!
 
Regards,
Srivatsa
VMware Photon OS


