Re: [PATCH v6 2/7] badblocks: add helper routines for badblock ranges handling

> On Sep 21, 2022, at 20:13, Xiao Ni <xni@xxxxxxxxxx> wrote:

[snipped]

>> 
>> +/*
>> + * Find the range that starts at-or-before bad->start. If 'hint' is
>> + * provided (hint >= 0), search the bad table from index 'hint' first.
>> + * Very probably the wanted bad range can be found at the hint index,
>> + * so unnecessary while-loop iterations can be avoided.
>> + */
>> +static int prev_badblocks(struct badblocks *bb, struct badblocks_context *bad,
>> +                         int hint)
>> +{
>> +       sector_t s = bad->start;
>> +       int ret = -1;
>> +       int lo, hi;
>> +       u64 *p;
>> +
>> +       if (!bb->count)
>> +               goto out;
>> +
>> +       if (hint >= 0) {
>> +               ret = prev_by_hint(bb, s, hint);
>> +               if (ret >= 0)
>> +                       goto out;
>> +       }
>> +
>> +       lo = 0;
>> +       hi = bb->count;
>> +       p = bb->page;
> 
> Would it be better to add a check like this:
> 
> if (BB_OFFSET(p[lo]) > s)
>   return ret;


Yeah, it is worth adding such a check to avoid the following bisect search if we are lucky.
Will do it in the next version.
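
Roughly something like this (untested sketch, at the head of the bisect
search):

	lo = 0;
	hi = bb->count;
	p = bb->page;

	/* The first entry already starts after 's': nothing at-or-before
	 * 's' can be found, so the bisect search can be skipped entirely.
	 */
	if (BB_OFFSET(p[lo]) > s)
		goto out;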

> 
>> +
>> +       while (hi - lo > 1) {
>> +               int mid = (lo + hi)/2;
>> +               sector_t a = BB_OFFSET(p[mid]);
>> +
>> +               if (a == s) {
>> +                       ret = mid;
>> +                       goto out;
>> +               }
>> +
>> +               if (a < s)
>> +                       lo = mid;
>> +               else
>> +                       hi = mid;
>> +       }
>> +
>> +       if (BB_OFFSET(p[lo]) <= s)
>> +               ret = lo;
>> +out:
>> +       return ret;
>> +}
>> +
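
(For anyone following the thread without the tree at hand: each slot of
bb->page is a u64 which packs the range's start offset, length and ack
bit. Below is a small standalone userspace demo of that encoding; the
macros and masks are copied from include/linux/badblocks.h, so please
double-check them against your tree.)

#include <stdio.h>
#include <stdint.h>

typedef uint64_t u64;

#define BB_OFFSET_MASK	0x7FFFFFFFFFFFFE00ULL
#define BB_LEN_MASK	0x00000000000001FFULL
#define BB_ACK_MASK	0x8000000000000000ULL

#define BB_OFFSET(x)	(((x) & BB_OFFSET_MASK) >> 9)
#define BB_LEN(x)	(((x) & BB_LEN_MASK) + 1)
#define BB_ACK(x)	(!!((x) & BB_ACK_MASK))
#define BB_MAKE(a, l, ack) (((a) << 9) | ((l) - 1) | ((u64)(!!(ack)) << 63))
#define BB_END(x)	(BB_OFFSET(x) + BB_LEN(x))

int main(void)
{
	/* Encode 8 bad sectors starting at sector 1024, acknowledged. */
	u64 e = BB_MAKE(1024ULL, 8, 1);

	/* Prints: offset=1024 len=8 ack=1 end=1032 */
	printf("offset=%llu len=%llu ack=%d end=%llu\n",
	       (unsigned long long)BB_OFFSET(e),
	       (unsigned long long)BB_LEN(e),
	       (int)BB_ACK(e),
	       (unsigned long long)BB_END(e));
	return 0;
}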

[snipped]

>> 
>> +/*
>> + * Combine the bad ranges indexed by 'prev' and 'prev - 1' (from the
>> + * bad table) into one larger bad range, and the new range is indexed
>> + * by 'prev - 1'.
>> + */
>> +static void front_combine(struct badblocks *bb, int prev)
>> +{
>> +       u64 *p = bb->page;
>> +
>> +       p[prev - 1] = BB_MAKE(BB_OFFSET(p[prev - 1]),
>> +                             BB_LEN(p[prev - 1]) + BB_LEN(p[prev]),
>> +                             BB_ACK(p[prev]));
>> +       if ((prev + 1) < bb->count)
>> +               memmove(p + prev, p + prev + 1, (bb->count - prev - 1) * 8);
>            else
>                    p[prev] = 0;

The caller of front_combine() will decrease bb->count by 1, so clearing p[prev] here can be avoided. I will add a code comment to front_combine() to explain this.
Thanks.
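
A hypothetical call site would look roughly like this (sketch only; the
actual caller in the patch may differ):

	/* 'bad' has filled the gap, so the ranges at 'prev - 1' and
	 * 'prev' are now adjacent and can be merged into one slot.
	 */
	front_combine(bb, prev);
	/* front_combine() moved the tail of the table one slot to the
	 * left; decreasing the count makes the stale last slot
	 * unreachable, so front_combine() does not need to clear it.
	 */
	bb->count--;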

[snipped]

>> +/*
>> + * Return 'true' if the range indicated by 'bad' can overwrite the bad
>> + * range (from bad table) indexed by 'prev'.
>> + *
>> + * The range indicated by 'bad' can overwrite the bad range indexed by
>> + * 'prev' when,
>> + * 1) The whole range indicated by 'bad' can cover part of, or the
>> + *    whole, bad range (from the bad table) indexed by 'prev'.
>> + * 2) The ack value of 'bad' is larger than or equal to the ack value
>> + *    of the bad range 'prev'.
> 
> In fact, it can overwrite only when the ack value of 'bad' is larger
> than the ack value of the bad range 'prev'.
> If the ack values are equal, it should do a merge operation.

Yes, you are right: if extra is 0, it is indeed a merge operation. And if extra is 1 or 2, it means the bad blocks range is split; I call such a situation an overwrite.
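
To make the extra values concrete, with hypothetical sector numbers
(table entries and ranges written as [start, end)):

  extra == 0: entry [10, 20), bad [10, 20) -> ack update in place, no new slot
  extra == 1: entry [10, 20), bad [10, 15) -> split into [10, 15) + [15, 20)
  extra == 2: entry [10, 30), bad [15, 20) -> split into [10, 15) + [15, 20) + [20, 30)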

[snipped]


>> +/*
>> + * Do the overwrite from the range indicated by 'bad' to the bad range
>> + * (from bad table) indexed by 'prev'.
>> + * The previously called can_front_overwrite() has determined how many
>> + * extra bad range(s) might be split off and added into the bad table.
>> + * All the splitting cases in the bad table will be handled here.
>> + */
>> +static int front_overwrite(struct badblocks *bb, int prev,
>> +                          struct badblocks_context *bad, int extra)
>> +{
>> +       u64 *p = bb->page;
>> +       sector_t orig_end = BB_END(p[prev]);
>> +       int orig_ack = BB_ACK(p[prev]);
>> +
>> +       switch (extra) {
>> +       case 0:
>> +               p[prev] = BB_MAKE(BB_OFFSET(p[prev]), BB_LEN(p[prev]),
>> +                                 bad->ack);
>> +               break;
>> +       case 1:
>> +               if (BB_OFFSET(p[prev]) == bad->start) {
>> +                       p[prev] = BB_MAKE(BB_OFFSET(p[prev]),
>> +                                         bad->len, bad->ack);
>> +                       memmove(p + prev + 2, p + prev + 1,
>> +                               (bb->count - prev - 1) * 8);
>> +                       p[prev + 1] = BB_MAKE(bad->start + bad->len,
>> +                                             orig_end - BB_END(p[prev]),
>> +                                             orig_ack);
>> +               } else {
>> +                       p[prev] = BB_MAKE(BB_OFFSET(p[prev]),
>> +                                         bad->start - BB_OFFSET(p[prev]),
>> +                                         BB_ACK(p[prev]));
> 
> s/BB_ACK(p[prev])/orig_ack/g

Yeah, this one is better. I will use it in the next version.


>> +                       /*
>> +                        * prev + 2 -> prev + 1 + 1, which is for,
>> +                        * 1) prev + 1: the slot index of the previous one
>> +                        * 2) + 1: one more slot for extra being 1.
>> +                        */
>> +                       memmove(p + prev + 2, p + prev + 1,
>> +                               (bb->count - prev - 1) * 8);
>> +                       p[prev + 1] = BB_MAKE(bad->start, bad->len, bad->ack);
>> +               }
>> +               break;
>> +       case 2:
>> +               p[prev] = BB_MAKE(BB_OFFSET(p[prev]),
>> +                                 bad->start - BB_OFFSET(p[prev]),
>> +                                 BB_ACK(p[prev]));
> 
> s/BB_ACK(p[prev])/orig_ack/g

It will be used in the next version.

> 
>> +               /*
>> +                * prev + 3 -> prev + 1 + 2, which is for,
>> +                * 1) prev + 1: the slot index of the previous one
>> +                * 2) + 2: two more slots for extra being 2.
>> +                */
>> +               memmove(p + prev + 3, p + prev + 1,
>> +                       (bb->count - prev - 1) * 8);
>> +               p[prev + 1] = BB_MAKE(bad->start, bad->len, bad->ack);
>> +               p[prev + 2] = BB_MAKE(BB_END(p[prev + 1]),
>> +                                     orig_end - BB_END(p[prev + 1]),
>> +                                     BB_ACK(p[prev]));
> 
> s/BB_ACK(p[prev])/orig_ack/g


It will be used in the next version.


>> +               break;
>> +       default:
>> +               break;
>> +       }
>> +
>> +       return bad->len;
>> +}
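
As a concrete trace of the extra == 2 path (hypothetical values): with
p[prev] = [100, 200) and bad = {start = 120, len = 30, ack = 1}, the
code above first shrinks p[prev] to [100, 120) keeping orig_ack, then
the memmove() opens two slots, p[prev + 1] becomes the acked
[120, 150), and p[prev + 2] becomes [150, 200) with orig_ack again.
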
>> +
[snipped]

Thank you for the review!

Coly Li
