On Wed, May 26, 2021 at 7:00 PM Guoqing Jiang <jgq516@xxxxxxxxx> wrote:
> [...]
> >> (bioset_exit -> bioset_integrity_free) is triggered.
> >>
> >> I thought we probably need a comment here given it is not explicit.
> > I think it is better to handle it within this function. Does it have
> > any downside
> > to call bioset_integrity_free(&mddev->bio_set) here?
> >
> > [...]
>
> md_run has to deal with failure path by call bioset_exit, which
> already call bioset_integrity_free implicitly. Why the additional
> call of bioset_integrity_free would be helpful? Or do you want
> to remove bioset_exit from md_run as well?

I guess you are right. Removing this one.

>
> >>>> +
> >>>> +	if (!blk_queue_io_stat((*bio)->bi_bdev->bd_disk->queue))
> >>>> +		return;
> >>> Added blk_queue_flag_set(QUEUE_FLAG_IO_STAT, mddev->queue); to md_run.
> >>> We still need it as md doesn't use mq. Without it, the default iostats is 0.
> >>>
> >> It enables io accounting by default, so raid5 and raid0 users have to
> >> disable it if they don't want the additional latency.
> > iostats was on by default before this set, as we didn't check
> > blk_queue_io_stat().
> > So it is better to keep the same behavior.
>
> Could you point the place where md enables iostats before the set?
> I can't find relevant code for it.

Before this set, we did not set QUEUE_FLAG_IO_STAT, and we didn't check
blk_queue_io_stat() either. So even with /sys/block/mdX/queue/iostats of 0,
we still get iostats for the md device.

By setting QUEUE_FLAG_IO_STAT with the patch, the users still get iostats
working by default. Does this make sense?

Thanks,
Song