> On 30 May 2019, at 10:29, Srivatsa S. Bhat <srivatsa@xxxxxxxxxxxxx> wrote:
>
> On 5/29/19 12:41 AM, Paolo Valente wrote:
>>
>>> On 29 May 2019, at 03:09, Srivatsa S. Bhat <srivatsa@xxxxxxxxxxxxx> wrote:
>>>
>>> On 5/23/19 11:51 PM, Paolo Valente wrote:
>>>>
>>>>> On 24 May 2019, at 01:43, Srivatsa S. Bhat <srivatsa@xxxxxxxxxxxxx> wrote:
>>>>>
>>>>> When trying to run multiple dd tasks simultaneously, I get the
>>>>> kernel panic shown below (mainline is fine, without these patches).
>>>>>
>>>> Could you please somehow provide me with the output of
>>>> list *(bfq_serv_to_charge+0x21) ?
>>>>
>>> Hi Paolo,
>>>
>>> Sorry for the delay! Here you go:
>>>
>>> (gdb) list *(bfq_serv_to_charge+0x21)
>>> 0xffffffff814bad91 is in bfq_serv_to_charge (./include/linux/blkdev.h:919).
>>> 914
>>> 915     extern unsigned int blk_rq_err_bytes(const struct request *rq);
>>> 916
>>> 917     static inline unsigned int blk_rq_sectors(const struct request *rq)
>>> 918     {
>>> 919             return blk_rq_bytes(rq) >> SECTOR_SHIFT;
>>> 920     }
>>> 921
>>> 922     static inline unsigned int blk_rq_cur_sectors(const struct request *rq)
>>> 923     {
>>> (gdb)
>>>
>>> For some reason, I've not been able to reproduce this issue after
>>> reporting it here. (Perhaps I got lucky when I hit the kernel panic
>>> a bunch of times last week.)
>>>
>>> I'll test with your fix applied and see how it goes.
>>>
>> Great! The offending line above gives me hope that my fix is correct.
>> If no more failures occur, then I'm eager (and a little worried ...)
>> to see how it goes with throughput :)
>>
> Your fix held up well under my testing :)
>

Great!

> As for throughput, with low_latency = 1, I get around 1.4 MB/s with
> bfq (vs 1.6 MB/s with mq-deadline). This is a huge improvement
> compared to what it was before (70 KB/s).
>

That's beautiful news!

So, now we have the best of both worlds: maximum throughput and total
control over I/O (including minimum latency for interactive and soft
real-time applications). Besides, no manual configuration is needed.
Of course, this holds unless/until you find other flaws ... ;)
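(For reference, a minimal sketch of how a test of this kind might be
set up. The device name, mount point, and dd parameters below are
assumptions; the thread does not give the exact commands used.)

  # Select bfq and enable its low-latency heuristics, assuming the
  # disk under test is /dev/sdb (an assumption):
  echo bfq > /sys/block/sdb/queue/scheduler
  echo 1 > /sys/block/sdb/queue/iosched/low_latency

  # Run several dd tasks concurrently, as in the original report
  # (writer tasks, block size, and count are illustrative guesses):
  for i in 1 2 3 4; do
      dd if=/dev/zero of=/mnt/test/file$i bs=512K count=2000 oflag=direct &
  done
  wait

(To compare against mq-deadline, write "mq-deadline" to the same
scheduler file and repeat the run.)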
> With tracing on, the throughput is a bit lower (as expected, I guess),
> about 1 MB/s, and the corresponding trace file
> (trace-waker-detection-1MBps) is available at:
>
> https://www.dropbox.com/s/3roycp1zwk372zo/bfq-traces.tar.gz?dl=0
>

Thank you for the new trace. I've analyzed it carefully, and, as I
imagined, this residual 12% throughput loss is due to a couple of
heuristics that occasionally get something wrong. Most likely, ~12%
is the worst-case loss, and if one repeats the tests, the loss may be
much lower in some runs.

I think it is very hard to eliminate this fluctuation while keeping
full I/O control. But, who knows, I might have a lucky idea in the
future.

At any rate, since you pointed out that you are interested in
out-of-the-box performance, let me complete the picture: if
low_latency is left set, then, in return for this 12% loss, one gets
a) at least 1000% higher responsiveness, e.g., 1000% lower start-up
   times for applications under load [1];
b) 500-1000% higher throughput in multi-client server workloads, as I
   already pointed out [2].

I'm going to prepare complete patches. In addition, if that's ok with
you, I'll report these results on the bug you created. Then I guess
we can close it.

[1] https://algo.ing.unimo.it/people/paolo/disk_sched/results.php
[2] https://www.linaro.org/blog/io-bandwidth-management-for-production-quality-services/

> Thank you so much for your tireless efforts in fixing this issue!
>

I did enjoy working on this with you: your test case and your support
enabled me to make important improvements. So, thank you very much
for your collaboration so far,
Paolo

> Regards,
> Srivatsa
> VMware Photon OS