Re: [md PATCH 09/14] md/raid10: stop using bi_phys_segments

On Thu, Feb 16 2017, Jack Wang wrote:

> 2017-02-16 5:39 GMT+01:00 NeilBrown <neilb@xxxxxxxx>:
>> raid10 currently repurposes bi_phys_segments on each
>> incoming bio to count how many r10bios were used to encode
>> the request.
>>
>> We need to know when the number of attached r10bio reaches
>> zero to:
>> 1/ call bio_endio() when all IO on the bio is finished
>> 2/ decrement ->nr_pending so that resync IO can proceed.
>>
>> Now that the bio has its own __bi_remaining counter, that
>> can be used instead. We can call bio_inc_remaining to
>> increment the counter and call bio_endio() every time an
>> r10bio completes, rather than only when bi_phys_segments
>> reaches zero.
>>
>> This addresses point 1, but not point 2.  bio_endio()
>> doesn't (and cannot) report when the last r10bio has
>> finished, so a different approach is needed.
>>
>> So: instead of counting bios in ->nr_pending, count r10bios.
>> i.e. every time we attach a bio, increment nr_pending.
>> Every time an r10bio completes, decrement nr_pending.
>>
>> Normally we only increment nr_pending after first checking
>> that ->barrier is zero, or some other non-trivial tests and
>> possible waiting.  When attaching multiple r10bios to a bio,
>> we only need the tests and the waiting once.  After the
>> first increment, subsequent increments can happen
>> unconditionally as they are really all part of the one
>> request.
>>
>> So introduce inc_pending() which can be used when we know
>> that nr_pending is already elevated.
>>
>> Signed-off-by: NeilBrown <neilb@xxxxxxxx>
>> ---
>>  drivers/md/raid10.c |   76 +++++++++++++++++----------------------------------
>>  1 file changed, 25 insertions(+), 51 deletions(-)
>>
>> diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
>> index 9258cbe233bb..6b4d8643c574 100644
>> --- a/drivers/md/raid10.c
>> +++ b/drivers/md/raid10.c
>> @@ -301,27 +301,18 @@ static void reschedule_retry(struct r10bio *r10_bio)
>>  static void raid_end_bio_io(struct r10bio *r10_bio)
>>  {
>>         struct bio *bio = r10_bio->master_bio;
>> -       int done;
>>         struct r10conf *conf = r10_bio->mddev->private;
>>
>> -       if (bio->bi_phys_segments) {
>> -               unsigned long flags;
>> -               spin_lock_irqsave(&conf->device_lock, flags);
>> -               bio->bi_phys_segments--;
>> -               done = (bio->bi_phys_segments == 0);
>> -               spin_unlock_irqrestore(&conf->device_lock, flags);
>> -       } else
>> -               done = 1;
>>         if (!test_bit(R10BIO_Uptodate, &r10_bio->state))
>>                 bio->bi_error = -EIO;
>> -       if (done) {
>> -               bio_endio(bio);
>> -               /*
>> -                * Wake up any possible resync thread that waits for the device
>> -                * to go idle.
>> -                */
>> -               allow_barrier(conf);
>> -       }
>> +
>> +       /*
>> +        * Wake up any possible resync thread that waits for the device
>> +        * to go idle.
>> +        */
>> +       allow_barrier(conf);
>> +       bio_endio(bio);
>> +
>
> Hi Neil,
>
> Why did you switch the order of the two lines above? Is there a
> reason behind it? I notice that in raid1 you kept the order.

I cannot think why I would have done that.  It doesn't matter what
order they are in, but it does make sense to leave the order unchanged
unless there is a good reason.  I'll put it back.

Thanks,
NeilBrown


