On Tue, Mar 31, 2020 at 5:16 PM Christoph Hellwig <hch@xxxxxxxxxxxxx> wrote:
>
> On Tue, Mar 31, 2020 at 04:45:33PM +0800, Weiping Zhang wrote:
> > On Tue, Mar 31, 2020 at 4:25 PM Christoph Hellwig <hch@xxxxxxxxxxxxx> wrote:
> > >
> > > On Fri, Mar 27, 2020 at 02:28:59PM +0800, Weiping Zhang wrote:
> > > > Change-Id: Ibb9caf20616f83e111113ab5c824c05930c0e523
> > > > Signed-off-by: Weiping Zhang <zhangweiping@xxxxxxxxxxxxxx>
> > >
> > > This needs a commit description, and lose the weird change id.
> > >
> > OK, I'll rewrite the commit description: the patch records the timestamp
> > when a bio is issued to the disk driver, so we can get the delta time in
> > rq_qos_done_bio. It's the same as the D2C time of blktrace.
> >
> > > I also think you need to find a way to not bloat the bio even more,
> > > cgroup is a really bad offender for bio size.
> >
> > struct request {
> >         u64 io_start_time_ns;
> >
> > also records this timestamp, I'll check if we can use it.
>
> But except for a few exceptions bios are never issued directly to the
> driver, requests are.  And the few exceptions (rsxx, umem) probably should
> be rewritten to use requests.  And with generic_{start,end}_io_acct we
> already have helpers to track bio based stats, which we should not
> duplicate just for cgroups.

generic_{start,end}_io_acct and blk_account_io_done both rely on a
per-partition timeline (part->stamp), which cgroups don't have, so the
block cgroup code can't use these generic helpers to count the total io
ticks for read, write and the other op types.

Instead, the block cgroup code uses delta = now - bio->bi_issue[issue_time]
to count total io ticks.

How about moving this into the blk-iotrack code? rq_qos_issue calls
rq_qos_ops.issue, so if the user doesn't enable blk-iotrack, none of this
code is executed.

Thanks
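
P.S. To make the part->stamp point concrete, this is roughly what the
shared-timeline update looks like (paraphrased from update_io_ticks() in
block/bio.c around v5.6, so treat it as a sketch rather than the exact
source):

/*
 * Paraphrase of update_io_ticks(): io_ticks advances at most once per
 * jiffy, charged by whichever context wins the cmpxchg on the single
 * per-partition stamp. Per-cgroup accounting has no such shared stamp,
 * hence the bi_issue based delta instead.
 */
void update_io_ticks(struct hd_struct *part, unsigned long now)
{
	unsigned long stamp;
again:
	stamp = READ_ONCE(part->stamp);
	if (unlikely(stamp != now)) {
		if (likely(cmpxchg(&part->stamp, stamp, now) == stamp))
			__part_stat_add(part, io_ticks, 1);
	}
	if (part->partno) {		/* also charge the whole disk */
		part = &part_to_disk(part)->part0;
		goto again;
	}
}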
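
And a minimal sketch of the bi_issue based delta described above, as it
could look in a blk-iotrack done_bio hook; iotrack_done_bio() and
iotrack_account() are hypothetical names for this series, while
bio_issue_time() and __bio_issue_time() are the existing helpers that
blk-iolatency already uses:

static void iotrack_done_bio(struct rq_qos *rqos, struct bio *bio)
{
	/* start was stamped into bio->bi_issue when the bio was issued */
	u64 start = bio_issue_time(&bio->bi_issue);
	/* truncate "now" the same way so the two values are comparable */
	u64 now = __bio_issue_time(ktime_get_ns());

	if (now <= start)	/* truncated clock wrapped, drop the sample */
		return;

	/* the delta is the interval blktrace reports as D2C */
	iotrack_account(bio, now - start);	/* hypothetical helper */
}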
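
Finally, the dispatch path I mean is the existing one in
block/blk-rq-qos.h and block/blk-rq-qos.c (quoted roughly from memory):

static inline void rq_qos_issue(struct request_queue *q, struct request *rq)
{
	/* no rq_qos policy registered on the queue -> no callbacks at all */
	if (q->rq_qos)
		__rq_qos_issue(q->rq_qos, rq);
}

void __rq_qos_issue(struct rq_qos *rqos, struct request *rq)
{
	do {
		/* only policies that implement ->issue are called */
		if (rqos->ops->issue)
			rqos->ops->issue(rqos, rq);
		rqos = rqos->next;
	} while (rqos);
}

So an unconfigured blk-iotrack costs nothing beyond the q->rq_qos NULL
check.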