On Tue, 20 Sep 2022, Christoph Hellwig wrote:

> > @@ -289,6 +308,23 @@ static void brd_submit_bio(struct bio *b
> >  	struct bio_vec bvec;
> >  	struct bvec_iter iter;
> >  
> > +	if (bio_op(bio) == REQ_OP_DISCARD) {
> > +		sector_t len = bio_sectors(bio);
> > +		sector_t front_pad = -sector & (PAGE_SECTORS - 1);
> > +		sector += front_pad;
> > +		if (unlikely(len <= front_pad))
> > +			goto endio;
> > +		len -= front_pad;
> > +		len = round_down(len, PAGE_SECTORS);
> > +		while (len) {
> > +			brd_free_page(brd, sector);
> > +			sector += PAGE_SECTORS;
> > +			len -= PAGE_SECTORS;
> > +			cond_resched();
> > +		}
> > +		goto endio;
> > +	}
> > +
> >  	bio_for_each_segment(bvec, bio, iter) {
> 
> Please add separate helpers for each type of I/O and just make the
> main submit_bio method a dispatch on the types instead of this
> spaghetti code.
> 
> > +	disk->queue->limits.discard_granularity = PAGE_SIZE;
> > +	blk_queue_max_discard_sectors(disk->queue, UINT_MAX);
> 
> We'll probably want an opt-in for this new feature.

OK. I addressed these concerns and I'll send a second version of the patch set.

Mikulas
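
P.S. To make the requested structure concrete, below is a rough sketch of
what the dispatch split could look like. The helper names (brd_do_discard,
brd_do_rw) are illustrative only and not necessarily what v2 will use;
brd_do_rw stands in for the existing bio_for_each_segment loop, and error
handling is omitted:

	static void brd_do_discard(struct brd_device *brd, sector_t sector,
				   sector_t len)
	{
		/*
		 * Only whole pages can be freed: skip ahead to the first
		 * page boundary, then round the remaining length down.
		 */
		sector_t front_pad = -sector & (PAGE_SECTORS - 1);

		sector += front_pad;
		if (len <= front_pad)
			return;
		len = round_down(len - front_pad, PAGE_SECTORS);

		while (len) {
			brd_free_page(brd, sector);
			sector += PAGE_SECTORS;
			len -= PAGE_SECTORS;
			cond_resched();
		}
	}

	static void brd_submit_bio(struct bio *bio)
	{
		struct brd_device *brd = bio->bi_bdev->bd_disk->private_data;

		switch (bio_op(bio)) {
		case REQ_OP_DISCARD:
			brd_do_discard(brd, bio->bi_iter.bi_sector,
				       bio_sectors(bio));
			break;
		default:
			/* The current read/write loop moves here. */
			brd_do_rw(brd, bio);
			break;
		}
		bio_endio(bio);
	}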
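
P.P.S. For the opt-in, one option would be a module parameter gating the
discard limits. The parameter name below is only an example:

	static bool brd_discard;
	module_param_named(discard, brd_discard, bool, 0444);
	MODULE_PARM_DESC(discard, "Expose discard support (frees backing pages)");

	/*
	 * In the disk setup path, only advertise discard when the user
	 * asked for it:
	 */
	if (brd_discard) {
		disk->queue->limits.discard_granularity = PAGE_SIZE;
		blk_queue_max_discard_sectors(disk->queue, UINT_MAX);
	}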