On Mon, Oct 03, 2022 at 11:09:06AM +0300, Sagi Grimberg wrote:
>> make up the multipath device. Only the low-level driver can do that right now,
>> so perhaps either call into the driver to get all the block_device parts, or
>> the gendisk needs to maintain a list of those parts itself.
>
> I definitely don't think we want to propagate the device relationship to
> blk-mq. But a callback to the driver also seems very niche to nvme
> multipath and is also kinda messy to combine calculations like
> iops/bw/latency accurately, which depends on the submission distribution
> to the bottom devices, which we would need to track now.
>
> I'm leaning towards just moving forward with this, take the relatively
> small hit, and if people absolutely care about the extra latency, then
> they can disable it altogether (upper and/or bottom devices).

So looking at the patches I'm really not a big fan of the extra
accounting calls, and especially the start_time field in the
nvme_request, and even more so the special start/end calls in all the
transport drivers.

The stats sysfs attributes already have entirely separate blk-mq vs
bio-based code paths.  So I think having a block_device operation that
replaces part_stat_read_all, which would allow nvme to iterate over all
paths and collect the numbers, would seem a lot nicer.  There might be
some caveats like having to stash away the numbers for disappearing
paths, though.
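
A rough, untested sketch of what such an operation could look like (the
->get_stats name, its placement in block_device_operations, and the exact
signature are just placeholders, not an existing interface):

	/*
	 * Hypothetical new method: let the driver report cumulative I/O
	 * stats for a bdev instead of the core reading part0 directly.
	 */
	struct block_device_operations {
		...
		void (*get_stats)(struct block_device *bdev,
				  struct disk_stats *stat);
		...
	};

	/*
	 * Possible nvme-multipath implementation: walk the sibling paths
	 * of the ns_head and sum up the per-path counters that
	 * part_stat_read_all() already maintains for the bottom devices.
	 */
	static void nvme_ns_head_get_stats(struct block_device *bdev,
			struct disk_stats *stat)
	{
		struct nvme_ns_head *head = bdev->bd_disk->private_data;
		struct nvme_ns *ns;
		struct disk_stats path_stat;
		int srcu_idx, i;

		memset(stat, 0, sizeof(*stat));

		srcu_idx = srcu_read_lock(&head->srcu);
		list_for_each_entry_rcu(ns, &head->list, siblings) {
			part_stat_read_all(ns->disk->part0, &path_stat);
			for (i = 0; i < NR_STAT_GROUPS; i++) {
				stat->nsecs[i]   += path_stat.nsecs[i];
				stat->sectors[i] += path_stat.sectors[i];
				stat->ios[i]     += path_stat.ios[i];
				stat->merges[i]  += path_stat.merges[i];
			}
			stat->io_ticks += path_stat.io_ticks;
		}
		srcu_read_unlock(&head->srcu, srcu_idx);
	}

The stats sysfs/diskstats code would then call ->get_stats when the
method is set and fall back to part_stat_read_all otherwise, so plain
blk-mq and bio-based devices see no change.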