Hi Dennis Zhou and others who are familiar with the io latency controller,

I have a question about the io latency controller: how should the min latency value of a blk cgroup be configured? Using the avg_lat value may not be right.

From Documentation/admin-guide/cgroup-v2.rst, I understand the configuration should be done this way (an example of applying this recipe is at the end of this mail):

-------------------
Use the avg_lat value as a basis for your real setting, setting at
10-15% higher than the value in io.stat
-------------------

However, as far as I can tell from the code, the avg_lat value is a running average of io latency accumulated over past time windows, so it cannot reflect the average time cost of a single IO request. Whether a thread gets throttled, on the other hand, depends on comparing stat.mean with iolat->min_lat_nsec, and stat.mean does reflect the average time cost of a single IO request.

So, based on the analysis above, using avg_lat to configure the min io latency value of a blk cgroup may not be appropriate, because it is an accumulated running average. Why not use stat.mean as the basis for configuring the min io latency value instead?

Here is one experiment on my device:

cat io.stat
8:0 rbytes=586723328 wbytes=99248033792 rios=143243 wios=331782 dbytes=0 dios=0 use_delay=12 delay_nsec=0 depth=1 avg_lat=11526 win=800

The avg_lat value of 11526 (ns) is so large that it cannot be the average time cost of a single IO request on our device, so it cannot be used to configure the min io latency value.

Maybe I am just not familiar enough with the io latency controller code. If there is any mistake in my analysis above, I welcome your suggestions.
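
For reference, here is how I read the documented recipe, applied to the io.stat output from my device above. This is only a sketch of my understanding: cgroup-v2.rst gives the io.latency target in microseconds and the recipe says to take the avg_lat value from io.stat directly, so I assume the two share a unit here; the cgroup path is a placeholder, and the 12% margin is just one point inside the suggested 10-15% range.

# placeholder path for the cgroup being configured
cd /sys/fs/cgroup/<group>

# take avg_lat from io.stat as the basis (11526 on my device, as shown above)
cat io.stat

# documented recipe: set the target roughly 10-15% higher than avg_lat
# 11526 * 1.12 ~= 12909
echo "8:0 target=12909" > io.latency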