On Thu, Jul 21, 2022 at 8:12 PM Coly Li <colyli@xxxxxxx> wrote:
>
> This patch rewrites badblocks_check() with a similar coding style as
> _badblocks_set() and _badblocks_clear(). The only difference is that
> bad blocks checking may now handle multiple ranges in the bad block
> table.
>
> If a checking range covers multiple bad block ranges in the bad block
> table, like the following condition (C is the checking range; E1, E2,
> E3 are three bad block ranges in the bad block table),
>
>   +------------------------------------+
>   |                 C                  |
>   +------------------------------------+
>     +----+        +----+        +----+
>     | E1 |        | E2 |        | E3 |
>     +----+        +----+        +----+
>
> the improved badblocks_check() algorithm will divide checking range C
> into multiple parts and handle them in 7 runs of a while-loop,
>
>   +--+ +----+ +----+ +----+ +----+ +----+ +----+
>   |C1| | C2 | | C3 | | C4 | | C5 | | C6 | | C7 |
>   +--+ +----+ +----+ +----+ +----+ +----+ +----+
>        +----+        +----+        +----+
>        | E1 |        | E2 |        | E3 |
>        +----+        +----+        +----+
>
> and the start LBA and length of range E1 will be set as first_bad and
> bad_sectors for the caller.
>
> The return value rule is consistent for multiple ranges. For example,
> if there are the following bad block ranges in the bad block table,
>
>   Index No.    Start    Len    Ack
>       0         400      20     1
>       1         500      50     1
>       2         650      20     0
>
> the return value, first_bad, and bad_sectors from calling
> badblocks_check() with different checking ranges can be the following
> values,
>
>   Checking Start, Len    Return Value    first_bad    bad_sectors
>       100, 100                0             N/A           N/A
>       100, 310                1             400           10
>       100, 440                1             400           10
>       100, 540                1             400           10
>       100, 600               -1             400           10
>       100, 800               -1             400           10

The question here is: what is the usage of the return value? At the
moment the callers only check whether the return value is 0 or not.

> In order to make code review easier, this patch names the improved bad
> block range checking routine _badblocks_check() and does not change
> the existing badblocks_check() code yet. A later patch will delete the
> old code of badblocks_check() and make it a wrapper that calls
> _badblocks_check(). That way the newly added code won't get mixed up
> with the old deleted code, which makes code review clearer and easier.
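To make the question above concrete: a hypothetical caller that made
use of all three return values might look like the sketch below. This
is only an illustration; submit_io(), handle_known_bad() and
wait_for_ack() are made-up placeholder names, not kernel functions.

    /* Sketch of a caller using the full 0 / 1 / -1 return contract. */
    static void check_then_io(struct badblocks *bb, sector_t s, int nr)
    {
            sector_t first_bad;
            int bad_sectors;

            switch (badblocks_check(bb, s, nr, &first_bad, &bad_sectors)) {
            case 0:
                    /* no bad block range overlaps [s, s + nr) */
                    submit_io(s, nr);
                    break;
            case 1:
                    /* overlap, but all overlapping ranges are acknowledged */
                    handle_known_bad(first_bad, bad_sectors);
                    break;
            case -1:
                    /* at least one overlapping range is not acknowledged */
                    wait_for_ack(bb);
                    break;
            }
    }

Today no caller distinguishes 1 from -1, so the acked/unacked counting
only matters if a caller like this ever exists.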
>
> Signed-off-by: Coly Li <colyli@xxxxxxx>
> Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
> Cc: Geliang Tang <geliang.tang@xxxxxxxx>
> Cc: Hannes Reinecke <hare@xxxxxxx>
> Cc: Jens Axboe <axboe@xxxxxxxxx>
> Cc: NeilBrown <neilb@xxxxxxx>
> Cc: Vishal L Verma <vishal.l.verma@xxxxxxxxx>
> Cc: Xiao Ni <xni@xxxxxxxxxx>
> ---
>  block/badblocks.c | 97 +++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 97 insertions(+)
>
> diff --git a/block/badblocks.c b/block/badblocks.c
> index d3fa53594aa7..cbc79f056f74 100644
> --- a/block/badblocks.c
> +++ b/block/badblocks.c
> @@ -1261,6 +1261,103 @@ static int _badblocks_clear(struct badblocks *bb, sector_t s, int sectors)
>          return rv;
>  }
>
> +/* Do the exact work to check bad blocks range from the bad block table */
> +static int _badblocks_check(struct badblocks *bb, sector_t s, int sectors,
> +                            sector_t *first_bad, int *bad_sectors)
> +{
> +        int unacked_badblocks, acked_badblocks;
> +        int prev = -1, hint = -1, set = 0;
> +        struct badblocks_context bad;
> +        unsigned int seq;
> +        int len, rv;
> +        u64 *p;
> +
> +        WARN_ON(bb->shift < 0 || sectors == 0);
> +
> +        if (bb->shift > 0) {
> +                sector_t target;
> +
> +                /* round the start down, and the end up */
> +                target = s + sectors;
> +                rounddown(s, bb->shift);
> +                roundup(target, bb->shift);
> +                sectors = target - s;
> +        }
> +
> +retry:
> +        seq = read_seqbegin(&bb->lock);
> +
> +        p = bb->page;
> +        unacked_badblocks = 0;
> +        acked_badblocks = 0;
> +
> +re_check:
> +        bad.start = s;
> +        bad.len = sectors;
> +
> +        if (badblocks_empty(bb)) {
> +                len = sectors;
> +                goto update_sectors;
> +        }
> +
> +        prev = prev_badblocks(bb, &bad, hint);
> +
> +        /* start after all badblocks */
> +        if ((prev + 1) >= bb->count && !overlap_front(bb, prev, &bad)) {
> +                len = sectors;
> +                goto update_sectors;
> +        }

It's the same as in patch 4 here, about !overlap_front.

> +

It doesn't check the prev < 0 situation here. Is that right? prev can
be -1 here, and overlap_front() will then access p[-1].

> +        if (overlap_front(bb, prev, &bad)) {
> +                if (BB_ACK(p[prev]))
> +                        acked_badblocks++;
> +                else
> +                        unacked_badblocks++;
> +
> +                if (BB_END(p[prev]) >= (s + sectors))
> +                        len = sectors;
> +                else
> +                        len = BB_END(p[prev]) - s;
> +
> +                if (set == 0) {
> +                        *first_bad = BB_OFFSET(p[prev]);
> +                        *bad_sectors = BB_LEN(p[prev]);
> +                        set = 1;
> +                }
> +                goto update_sectors;
> +        }
> +
> +        /* Not front overlap, but behind overlap */
> +        if ((prev + 1) < bb->count && overlap_behind(bb, &bad, prev + 1)) {
> +                len = BB_OFFSET(p[prev + 1]) - bad.start;
> +                hint = prev + 1;
> +                goto update_sectors;
> +        }

Same as in patch 4 here.

Regards
Xiao

> +
> +        /* not cover any badblocks range in the table */
> +        len = sectors;
> +
> +update_sectors:
> +        s += len;
> +        sectors -= len;
> +
> +        if (sectors > 0)
> +                goto re_check;
> +
> +        WARN_ON(sectors < 0);
> +
> +        if (unacked_badblocks > 0)
> +                rv = -1;
> +        else if (acked_badblocks > 0)
> +                rv = 1;
> +        else
> +                rv = 0;
> +
> +        if (read_seqretry(&bb->lock, seq))
> +                goto retry;
> +
> +        return rv;
> +}
>
>  /**
>   * badblocks_check() - check a given range for bad sectors
> --
> 2.35.3
>
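One more note for anyone following the retry logic above:
_badblocks_check() is a lockless reader built on the kernel's seqlock
pattern, so the whole table scan is redone whenever a writer raced
with it. A generic sketch of that reader-side pattern (scan_table()
and the "table" variable are illustrative names only, not from the
patch):

    unsigned int seq;
    int rv;

    do {
            /* snapshot the sequence counter before reading */
            seq = read_seqbegin(&table->lock);
            /* read the shared state without taking any lock */
            rv = scan_table(table);
            /* if a writer changed the table meanwhile, redo the scan */
    } while (read_seqretry(&table->lock, seq));

The patch expresses the same loop with the retry: label and a goto
instead of do/while, because the badblocks counters must also be reset
on each pass.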