Re: [PATCH V10 03/19] block: use bio_for_each_bvec() to compute multi-page bvec count

On Thu, Nov 15, 2018 at 04:05:10PM -0500, Mike Snitzer wrote:
> On Thu, Nov 15 2018 at  3:20pm -0500,
> Omar Sandoval <osandov@xxxxxxxxxxx> wrote:
> 
> > On Thu, Nov 15, 2018 at 04:52:50PM +0800, Ming Lei wrote:
> > > First, it is more efficient to use bio_for_each_bvec() in both
> > > blk_bio_segment_split() and __blk_recalc_rq_segments() to compute how
> > > many multi-page bvecs there are in the bio.
> > > 
> > > Second, once bio_for_each_bvec() is used, a bvec may need to be
> > > split because its length can be much longer than the max segment size,
> > > so we have to split the big bvec into several segments.
> > > 
> > > Third, when splitting a multi-page bvec into segments, the max segment
> > > limit may be reached, so the bio split needs to be considered in this
> > > situation too.
> > > 
> > > Cc: Dave Chinner <dchinner@xxxxxxxxxx>
> > > Cc: Kent Overstreet <kent.overstreet@xxxxxxxxx>
> > > Cc: Mike Snitzer <snitzer@xxxxxxxxxx>
> > > Cc: dm-devel@xxxxxxxxxx
> > > Cc: Alexander Viro <viro@xxxxxxxxxxxxxxxxxx>
> > > Cc: linux-fsdevel@xxxxxxxxxxxxxxx
> > > Cc: Shaohua Li <shli@xxxxxxxxxx>
> > > Cc: linux-raid@xxxxxxxxxxxxxxx
> > > Cc: linux-erofs@xxxxxxxxxxxxxxxx
> > > Cc: David Sterba <dsterba@xxxxxxxx>
> > > Cc: linux-btrfs@xxxxxxxxxxxxxxx
> > > Cc: Darrick J. Wong <darrick.wong@xxxxxxxxxx>
> > > Cc: linux-xfs@xxxxxxxxxxxxxxx
> > > Cc: Gao Xiang <gaoxiang25@xxxxxxxxxx>
> > > Cc: Christoph Hellwig <hch@xxxxxx>
> > > Cc: Theodore Ts'o <tytso@xxxxxxx>
> > > Cc: linux-ext4@xxxxxxxxxxxxxxx
> > > Cc: Coly Li <colyli@xxxxxxx>
> > > Cc: linux-bcache@xxxxxxxxxxxxxxx
> > > Cc: Boaz Harrosh <ooo@xxxxxxxxxxxxxxx>
> > > Cc: Bob Peterson <rpeterso@xxxxxxxxxx>
> > > Cc: cluster-devel@xxxxxxxxxx
> > > Signed-off-by: Ming Lei <ming.lei@xxxxxxxxxx>
> > > ---
> > >  block/blk-merge.c | 90 ++++++++++++++++++++++++++++++++++++++++++++++---------
> > >  1 file changed, 76 insertions(+), 14 deletions(-)
> > > 
> > > diff --git a/block/blk-merge.c b/block/blk-merge.c
> > > index 91b2af332a84..6f7deb94a23f 100644
> > > --- a/block/blk-merge.c
> > > +++ b/block/blk-merge.c
> > > @@ -160,6 +160,62 @@ static inline unsigned get_max_io_size(struct request_queue *q,
> > >  	return sectors;
> > >  }
> > >  
> > > +/*
> > > + * Split the bvec @bv into segments, and update all kinds of
> > > + * variables.
> > > + */
> > > +static bool bvec_split_segs(struct request_queue *q, struct bio_vec *bv,
> > > +		unsigned *nsegs, unsigned *last_seg_size,
> > > +		unsigned *front_seg_size, unsigned *sectors)
> > > +{
> > > +	bool need_split = false;
> > > +	unsigned len = bv->bv_len;
> > > +	unsigned total_len = 0;
> > > +	unsigned new_nsegs = 0, seg_size = 0;
> > 
> > "unsigned int" here and everywhere else.
> 
> Curious why?  I've wondered what governs use of "unsigned" vs "unsigned
> int" recently and haven't found _the_ reason to pick one over the other.

My only reason to prefer unsigned int is consistency. unsigned int is
much more common in the kernel:

$ ag --cc -s 'unsigned\s+int' | wc -l
129632
$ ag --cc -s 'unsigned\s+(?!char|short|int|long)' | wc -l
22435

checkpatch also warns on plain unsigned.
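
For example, running checkpatch against a file that declares a bare
unsigned (file name hypothetical) reports something like:

$ ./scripts/checkpatch.pl -f test.c
WARNING: Prefer 'unsigned int' to bare use of 'unsigned'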

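For reference, a minimal sketch of the per-bvec splitting the patch
description talks about; this is not the actual patch code, and the
helper name is made up. It only counts how many pieces one large bvec
yields when each piece is capped by queue_max_segment_size():

#include <linux/blkdev.h>

/*
 * Sketch only: count how many segments one multi-page bvec splits
 * into when each segment is capped by the queue's max segment size.
 */
static unsigned int bvec_nr_segs(struct request_queue *q, struct bio_vec *bv)
{
	unsigned int len = bv->bv_len;
	unsigned int max_seg = queue_max_segment_size(q);
	unsigned int nsegs = 0;

	while (len) {
		unsigned int seg = min(len, max_seg);

		nsegs++;
		len -= seg;
	}

	return nsegs;
}

The real bvec_split_segs() in the patch also has to track the segment
sizes and total sectors so the caller can decide whether the bio itself
must be split, which is why it updates several counters by reference.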

