Re: [PATCH 1/2] block: Rework bio_for_each_segment_all()

On Mon, Mar 27, 2023 at 01:44:01PM -0400, Kent Overstreet wrote:
> This patch reworks bio_for_each_segment_all() to be more in line with how
> the other bio iterators work:
> 
>  - bio_iter_all_peek() now returns a synthesized bio_vec; we don't stash
>    one in the iterator and pass a pointer to it - bad. This way makes it
>    clearer what's a constructed value vs. a reference to something
>    pre-existing, and it also will help with cleaning up and
>    consolidating code with bio_for_each_folio_all().
> 
>  - We now provide bio_for_each_segment_all_continue(), for squashfs:
>    this makes their code clearer.
> 
> Signed-off-by: Kent Overstreet <kent.overstreet@xxxxxxxxx>
> Cc: Jens Axboe <axboe@xxxxxxxxx>
> Cc: linux-block@xxxxxxxxxxxxxxx
> Cc: Ming Lei <ming.lei@xxxxxxxxxx>
> Cc: Phillip Lougher <phillip@xxxxxxxxxxxxxxx>
> ---
>  block/bio.c               | 38 ++++++++++++------------
>  block/blk-map.c           | 38 ++++++++++++------------
>  block/bounce.c            | 12 ++++----
>  drivers/md/bcache/btree.c |  8 ++---
>  drivers/md/dm-crypt.c     | 10 +++----
>  drivers/md/raid1.c        |  4 +--
>  fs/btrfs/disk-io.c        | 10 +++----
>  fs/btrfs/extent_io.c      | 52 ++++++++++++++++-----------------
>  fs/btrfs/inode.c          |  8 ++---
>  fs/btrfs/raid56.c         | 18 ++++++------
>  fs/crypto/bio.c           |  8 ++---
>  fs/erofs/zdata.c          |  4 +--
>  fs/ext4/page-io.c         |  8 ++---
>  fs/ext4/readpage.c        |  4 +--
>  fs/f2fs/data.c            | 20 ++++++-------
>  fs/gfs2/lops.c            | 10 +++----
>  fs/gfs2/meta_io.c         |  8 ++---
>  fs/mpage.c                |  4 +--
>  fs/squashfs/block.c       | 48 ++++++++++++++++--------------
>  fs/squashfs/lz4_wrapper.c | 17 ++++++-----
>  fs/squashfs/lzo_wrapper.c | 17 ++++++-----
>  fs/verity/verify.c        |  4 +--
>  include/linux/bio.h       | 31 +++++++++++++++-----
>  include/linux/bvec.h      | 61 ++++++++++++++++++++++-----------------
>  24 files changed, 239 insertions(+), 203 deletions(-)
> 
> diff --git a/block/bio.c b/block/bio.c
> index ab59a491a8..8d3abe249e 100644
> --- a/block/bio.c
> +++ b/block/bio.c
> @@ -1170,13 +1170,13 @@ bool bio_add_folio(struct bio *bio, struct folio *folio, size_t len,
>  
>  void __bio_release_pages(struct bio *bio, bool mark_dirty)
>  {
> -	struct bvec_iter_all iter_all;
> -	struct bio_vec *bvec;
> +	struct bvec_iter_all iter;
> +	struct bio_vec bvec;
>  
> -	bio_for_each_segment_all(bvec, bio, iter_all) {
> -		if (mark_dirty && !PageCompound(bvec->bv_page))
> -			set_page_dirty_lock(bvec->bv_page);
> -		put_page(bvec->bv_page);
> +	bio_for_each_segment_all(bvec, bio, iter) {
> +		if (mark_dirty && !PageCompound(bvec.bv_page))
> +			set_page_dirty_lock(bvec.bv_page);
> +		put_page(bvec.bv_page);
>  	}

bio_for_each_segment_all() is only supposed to be used on a bio that owns
its bvec table, so it is perfectly fine for it to return a pointer to the
bvec and save an extra bvec copy on every iteration.

With this change, that efficiency is lost.
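
For reference, this is roughly how the existing pointer-based iteration
avoids that copy (a simplified sketch of the current include/linux/bvec.h
and include/linux/bio.h, with unrelated details elided):

	/*
	 * The iterator embeds the synthesized single-page bvec; the
	 * caller's pointer always refers to this embedded copy.
	 */
	struct bvec_iter_all {
		struct bio_vec	bv;	/* synthesized single-page bvec */
		int		idx;
		unsigned	done;
	};

	/*
	 * bvec_init_iter_all() returns &iter.bv, and bio_next_segment()
	 * refills iter.bv in place from the bio's own bvec table, so no
	 * struct bio_vec is copied out to the caller per segment.
	 */
	#define bio_for_each_segment_all(bvl, bio, iter) \
		for (bvl = bvec_init_iter_all(&iter); \
		     bio_next_segment((bio), &iter); )

With the patch, bio_iter_all_peek() instead constructs the single-page
bio_vec and returns it by value, so every iteration copies a struct
bio_vec onto the caller's stack.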


Thanks,
Ming



