Re: [PATCH] md: raid0: account for split bio in iostat accounting

On Thu, Aug 10, 2023 at 4:29 PM Paul Menzel <pmenzel@xxxxxxxxxxxxx> wrote:
>
> Dear Laurence,
>
>
> > On 10.08.23 at 14:12, Laurence Oberman wrote:
> > On Thu, 2023-08-10 at 08:42 +0200, Paul Menzel wrote:
>
> >> On 09.08.23 at 19:16, David Jeffery wrote:
> >>> When a bio is split by md raid0, the newly created bio will not be
> >>> tracked by md for I/O accounting. Only the portion of I/O still
> >>> assigned to the original bio which was reduced by the split will be
> >>> accounted for. This results in md iostat data sometimes showing I/O
> >>> values far below the actual amount of data being sent through md.
> >>>
> >>> md_account_bio() needs to be called for all bios generated by the
> >>> bio split.
> >>
> >> Could you please add how you tested this?
> >>
> >>> Signed-off-by: David Jeffery <djeffery@xxxxxxxxxx>
> >>> Tested-by: Laurence Oberman <loberman@xxxxxxxxxx>
> >>> ---
> >>>    drivers/md/raid0.c | 3 +--
> >>>    1 file changed, 1 insertion(+), 2 deletions(-)
> >>>
> >>> diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
> >>> index d1ac73fcd852..1fd559ac8c68 100644
> >>> --- a/drivers/md/raid0.c
> >>> +++ b/drivers/md/raid0.c
> >>> @@ -597,8 +597,7 @@ static bool raid0_make_request(struct mddev *mddev, struct bio *bio)
> >>>                  bio = split;
> >>>          }
> >>>
> >>> -       if (bio->bi_pool != &mddev->bio_set)
> >>> -               md_account_bio(mddev, &bio);
> >>> +       md_account_bio(mddev, &bio);
> >>>
> >>>          orig_sector = sector;
> >>>          zone = find_zone(mddev->private, &sector);
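
For context, a rough sketch of the split path that sits just above this
hunk in raid0_make_request() (paraphrased from the surrounding code, not
a verbatim copy of raid0.c; exact context may differ between kernel
versions):

	if (sectors < bio_sectors(bio)) {
		/* Carve off the leading part that fits in this zone/chunk. */
		struct bio *split = bio_split(bio, sectors, GFP_NOIO,
					      &mddev->bio_set);
		bio_chain(split, bio);
		/* The remainder is resubmitted and re-enters this function. */
		submit_bio_noacct(bio);
		/* Continue with the front part, allocated from mddev->bio_set. */
		bio = split;
	}

	/*
	 * The removed check skipped accounting whenever the bio came from
	 * mddev->bio_set, which is exactly the case for the split bio
	 * above. The front part of every split request therefore went
	 * uncounted; only the resubmitted remainder was accounted when it
	 * re-entered raid0_make_request(). Calling md_account_bio()
	 * unconditionally counts the split bio here and the remainder on
	 * re-entry, each once.
	 */
	md_account_bio(mddev, &bio);
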
>
> > This was actually reported by a customer on RHEL 9.2. The way this
> > played out is they were reading the md raid device directly using dd
> > buffered reads.
> > The md raid device serves an LVM volume and file system.
> >
> > Using dd if=/dev/md0 of=/dev/null bs=1024K count=10000, you would
> > sporadically see iostat report numbers like these, where the raid
> > device md0 shows invalid stats:
> >
> >       tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn Device
> >     221.00       111.0M         0.0k     111.0M       0.0k dm-12
> >     222.00       110.0M         0.0k     110.0M       0.0k dm-15
> >     223.00       111.0M         0.0k     111.0M       0.0k dm-16
> >     220.00       109.0M         0.0k     109.0M       0.0k dm-17
> >     221.00       111.0M         0.0k     111.0M       0.0k dm-18
> >     221.00       111.0M         0.0k     111.0M       0.0k dm-19
> >     219.00       109.0M         0.0k     109.0M       0.0k dm-20
> >     219.00       110.0M         0.0k     110.0M       0.0k dm-22
> >     880.00         6.9M         0.0k       6.9M       0.0k md0
> >                  ******
> >
> > After patching with David's patch the issue is no longer reproducible.
> > We tested using the same method that reproduced the issue reported by
> > the customer.
>
> Thank you for the detailed reply. Should David resend, it’d be great if
> that information could be added.

Hi David,

Please resend the set with testing information and the Fixes tag.

Thanks,
Song



