Re: NFS server regression in kernel 5.13 (tested w/ 5.13.9)

Chuck:

I see that the patch was merged into Linus' branch, but there have been
two stable releases since then and the patch hasn't been pulled in. You
mentioned that I should reach out to the stable maintainers in this
case; is the stable@xxxxxxxxxxxxxxx list the appropriate place to make
such a request?

Thanks.

- mike

On Sat, Sep 4, 2021 at 7:02 PM Chuck Lever III <chuck.lever@xxxxxxxxxx> wrote:
>
>
> > On Sep 4, 2021, at 1:41 PM, Mike Javorski <mike.javorski@xxxxxxxxx> wrote:
> >
> > Hi Chuck.
> >
> > I noticed that you sent in the 5.15 pull request, but Neil's fix
> > (e38b3f20059426a0adbde014ff71071739ab5226 in your tree) missed that
> > pull, and thus the fix isn't going to be backported to 5.14 in the
> > near term. Is there another 5.15 pull planned in the not-too-distant
> > future so this will get flagged for backporting,
>
> Yes. The final version of Neil’s patch was just a little late for the initial v5.15 NFSD pull request (IMO) so it’s queued for the next PR, probably this week.
>
>
> > or do I need to reach out to someone to expressly pull it into 5.14?
> > If the latter, can you point me toward the right person to ask (I
> > assume it's someone other than Greg KH)?
> >
> > Thanks
> >
> > - mike
> >
> >
> >> On Sat, Aug 28, 2021 at 11:23 AM Chuck Lever III <chuck.lever@xxxxxxxxxx> wrote:
> >>
> >>
> >>
> >>>> On Aug 27, 2021, at 11:22 PM, Mike Javorski <mike.javorski@xxxxxxxxx> wrote:
> >>>
> >>> I had some time this evening (and the kernel finally compiled), and
> >>> wanted to get this tested.
> >>>
> >>> The TL;DR: both patches are needed.
> >>>
> >>> Below are the test results from my replication of Neil's test. It is
> >>> readily apparent that both the 5.13.13 kernel AND the 5.13.13 kernel
> >>> with the 82011c80b3ec fix exhibit the randomness in read times that
> >>> was observed. The 5.13.13 kernel with both the 82011c80b3ec and
> >>> f6e70aab9dfe fixes brings the performance back in line with the
> >>> 5.12.15 kernel, which I tested as a baseline.
> >>>
> >>> Please forgive the inconsistency in sample counts. The test was running
> >>> in a while loop, and I just let it go long enough that the behavior was
> >>> consistent. The only change to the VM between tests was the different
> >>> kernel plus a reboot. The testing PC had a consistent workload during
> >>> the entire set of tests.
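[A rough, hypothetical stand-in for the kind of while loop described above:
it repeatedly reads a file from the NFS mount and prints elapsed wall-clock
seconds per pass. The file path, the 1 MiB buffer, and the use of
posix_fadvise() to drop client-cached pages are assumptions for
illustration, not the actual harness behind the numbers below.]

/* nfs_read_timer.c - time repeated sequential reads of one file */
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	static char buf[1 << 20];               /* 1 MiB read buffer */
	const char *path = argc > 1 ? argv[1] : "/mnt/nfs/testfile";
	struct timespec t0, t1;

	for (;;) {
		int fd = open(path, O_RDONLY);
		if (fd < 0) {
			perror("open");
			return 1;
		}
		/* Drop any client-cached pages so each pass hits the server */
		posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
		clock_gettime(CLOCK_MONOTONIC, &t0);
		while (read(fd, buf, sizeof(buf)) > 0)
			;
		clock_gettime(CLOCK_MONOTONIC, &t1);
		close(fd);
		printf("%.3f\n", (t1.tv_sec - t0.tv_sec) +
				 (t1.tv_nsec - t0.tv_nsec) / 1e9);
		fflush(stdout);
	}
}

[Percentiles like those in the tables below can then be computed from the
printed samples, e.g. with sort and awk.]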
> >>>
> >>> Test 0: 5.13.10 (base kernel in VM image, just for kicks)
> >>> ==================================================
> >>> Samples 30
> >>> Min 6.839
> >>> Max 19.998
> >>> Median 9.638
> >>> 75-P 10.898
> >>> 95-P 12.939
> >>> 99-P 18.005
> >>>
> >>> Test 1: 5.12.15 (known good)
> >>> ==================================================
> >>> Samples 152
> >>> Min 1.997
> >>> Max 2.333
> >>> Median 2.171
> >>> 75-P 2.230
> >>> 95-P 2.286
> >>> 99-P 2.312
> >>>
> >>> Test 2: 5.13.13 (known bad)
> >>> ==================================================
> >>> Samples 42
> >>> Min 3.587
> >>> Max 15.803
> >>> Median 6.039
> >>> 75-P 6.452
> >>> 95-P 10.293
> >>> 99-P 15.540
> >>>
> >>> Test 3: 5.13.13 + 82011c80b3ec fix
> >>> ==================================================
> >>> Samples 44
> >>> Min 4.309
> >>> Max 37.040
> >>> Median 6.615
> >>> 75-P 10.224
> >>> 95-P 19.516
> >>> 99-P 36.650
> >>>
> >>> Test 4: 5.13.13 + 82011c80b3ec fix + f6e70aab9dfe fix
> >>> ==================================================
> >>> Samples 131
> >>> Min 2.013
> >>> Max 2.397
> >>> Median 2.169
> >>> 75-P 2.211
> >>> 95-P 2.283
> >>> 99-P 2.348
> >>>
> >>> I am going to run the kernel w/ both fixes over the weekend, but
> >>> things look good at this point.
> >>>
> >>> - mike
> >>
> >> I've targeted Neil's fix for the first 5.15-rc NFSD pull request.
> >> I'd like to have Mel's Reviewed-by or Acked-by, though.
> >>
> >> I will add a Fixes: tag if Neil doesn't repost (no reason to at
> >> this point) so the fix should get backported automatically to
> >> recent stable kernels.
> >>
> >>
> >>> On Fri, Aug 27, 2021 at 4:49 PM Chuck Lever III <chuck.lever@xxxxxxxxxx> wrote:
> >>>>
> >>>>
> >>>>> On Aug 27, 2021, at 6:00 PM, Mike Javorski <mike.javorski@xxxxxxxxx> wrote:
> >>>>>
> >>>>> OK, an update. After several hours of spaced-out testing sessions, the
> >>>>> first patch seems to have resolved the issue. There may be a very tiny
> >>>>> bit of lag that still occurs when opening/processing new files, but so
> >>>>> far on this kernel I have not had any multi-second freezes. I am still
> >>>>> waiting on the kernel with Neil's patch to compile (it is building on
> >>>>> this underpowered server, so it's taking several hours), but I think the
> >>>>> testing there will just be to confirm it still works, and then to test
> >>>>> in a memory-constrained VM to see if I can recreate Neil's experiment.
> >>>>> I will likely have to do this over the weekend given the kernel compile
> >>>>> delay plus the fiddling with a VM.
> >>>>
> >>>> Thanks for your testing!
> >>>>
> >>>>
> >>>>> Chuck: I don't mean to overstep bounds, but is it possible to get that
> >>>>> patch pulled into 5.13 stable? That may help things for several people
> >>>>> while 5.14 goes through its shakedown in Arch Linux prior to release.
> >>>>
> >>>> The patch had a Fixes: tag, so it should get automatically backported
> >>>> to every kernel that has the broken commit. If you don't see it in
> >>>> a subsequent 5.13 stable kernel, you are free to ask the stable
> >>>> maintainers to consider it.
> >>>>
> >>>>
> >>>>> - mike
> >>>>>
> >>>>> On Fri, Aug 27, 2021 at 10:07 AM Mike Javorski <mike.javorski@xxxxxxxxx> wrote:
> >>>>>>
> >>>>>> Chuck:
> >>>>>> I just booted a 5.13.13 kernel with your suggested patch. No freezes
> >>>>>> on the first test, but that sometimes happens anyway, so I will let
> >>>>>> the server settle for a while and try it again later in the day (which
> >>>>>> would also align with Neil's comment about memory fragmentation being
> >>>>>> a contributor).
> >>>>>>
> >>>>>> Neil:
> >>>>>> I have started a compile with the above kernel + your patch to test
> >>>>>> next, unless you or Chuck determine that it isn't needed or that I
> >>>>>> should test the two patches separately. As the above fix is already
> >>>>>> merged into 5.14, it seemed logical to just add your patch on top.
> >>>>>>
> >>>>>> I will also try to set up a VM to test your md5sum scenario with the
> >>>>>> various kernels, since it's a much faster thing to test.
> >>>>>>
> >>>>>> - mike
> >>>>>>
> >>>>>> On Fri, Aug 27, 2021 at 7:13 AM Chuck Lever III <chuck.lever@xxxxxxxxxx> wrote:
> >>>>>>>
> >>>>>>>
> >>>>>>>> On Aug 27, 2021, at 3:14 AM, NeilBrown <neilb@xxxxxxx> wrote:
> >>>>>>>>
> >>>>>>>> Subject: [PATCH] SUNRPC: don't pause on incomplete allocation
> >>>>>>>>
> >>>>>>>> alloc_pages_bulk_array() attempts to allocate at least one page based on
> >>>>>>>> the provided pages, and then opportunistically allocates more if that
> >>>>>>>> can be done without dropping the spinlock.
> >>>>>>>>
> >>>>>>>> So if it returns fewer than requested, that could just mean that it
> >>>>>>>> needed to drop the lock.  In that case, try again immediately.
> >>>>>>>>
> >>>>>>>> Only pause for a time if no progress could be made.
> >>>>>>>
> >>>>>>> The case I was worried about was "no pages available on the
> >>>>>>> pcplist", in which case alloc_pages_bulk_array() resorts
> >>>>>>> to calling __alloc_pages() and returns only one new page.
> >>>>>>>
> >>>>>>> "No progress" would mean even __alloc_pages() failed.
> >>>>>>>
> >>>>>>> So this patch would behave essentially like the
> >>>>>>> pre-alloc_pages_bulk_array() code: call alloc_page() for
> >>>>>>> each empty struct page in the array without pausing. That
> >>>>>>> seems correct to me.
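[For readers following along, a simplified sketch of the per-page behaviour
Chuck describes: before f6e70aab9dfe, each empty slot in rq_pages was filled
with alloc_page(), and the thread only slept when a single-page allocation
failed outright. This is illustrative only, not the verbatim historical
source; the function name below is made up.]

/* Illustrative sketch only -- not the actual pre-f6e70aab9dfe code.
 * Fill each missing slot in rq_pages with alloc_page(); pause only when
 * a single-page allocation fails outright.
 */
static int svc_alloc_arg_sketch(struct svc_rqst *rqstp, unsigned long pages)
{
	unsigned long i;

	for (i = 0; i < pages; i++) {
		while (!rqstp->rq_pages[i]) {
			struct page *p = alloc_page(GFP_KERNEL);

			if (p) {
				rqstp->rq_pages[i] = p;
				continue;
			}
			/* Nothing was allocated at all: back off briefly,
			 * unless the thread is being shut down.
			 */
			set_current_state(TASK_INTERRUPTIBLE);
			if (signalled() || kthread_should_stop()) {
				set_current_state(TASK_RUNNING);
				return -EINTR;
			}
			schedule_timeout(msecs_to_jiffies(500));
		}
	}
	return 0;
}

[With Neil's change below, the bulk path only sleeps under the same
condition, a pass that adds no pages at all, so the worst case is no worse
than this older per-page behaviour.]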
> >>>>>>>
> >>>>>>>
> >>>>>>> I would add
> >>>>>>>
> >>>>>>> Fixes: f6e70aab9dfe ("SUNRPC: refresh rq_pages using a bulk page allocator")
> >>>>>>>
> >>>>>>>
> >>>>>>>> Signed-off-by: NeilBrown <neilb@xxxxxxx>
> >>>>>>>> ---
> >>>>>>>> net/sunrpc/svc_xprt.c | 7 +++++--
> >>>>>>>> 1 file changed, 5 insertions(+), 2 deletions(-)
> >>>>>>>>
> >>>>>>>> diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
> >>>>>>>> index d66a8e44a1ae..99268dd95519 100644
> >>>>>>>> --- a/net/sunrpc/svc_xprt.c
> >>>>>>>> +++ b/net/sunrpc/svc_xprt.c
> >>>>>>>> @@ -662,7 +662,7 @@ static int svc_alloc_arg(struct svc_rqst *rqstp)
> >>>>>>>> {
> >>>>>>>>    struct svc_serv *serv = rqstp->rq_server;
> >>>>>>>>    struct xdr_buf *arg = &rqstp->rq_arg;
> >>>>>>>> -     unsigned long pages, filled;
> >>>>>>>> +     unsigned long pages, filled, prev;
> >>>>>>>>
> >>>>>>>>    pages = (serv->sv_max_mesg + 2 * PAGE_SIZE) >> PAGE_SHIFT;
> >>>>>>>>    if (pages > RPCSVC_MAXPAGES) {
> >>>>>>>> @@ -672,11 +672,14 @@ static int svc_alloc_arg(struct svc_rqst *rqstp)
> >>>>>>>>            pages = RPCSVC_MAXPAGES;
> >>>>>>>>    }
> >>>>>>>>
> >>>>>>>> -     for (;;) {
> >>>>>>>> +     for (prev = 0;; prev = filled) {
> >>>>>>>>            filled = alloc_pages_bulk_array(GFP_KERNEL, pages,
> >>>>>>>>                                            rqstp->rq_pages);
> >>>>>>>>            if (filled == pages)
> >>>>>>>>                    break;
> >>>>>>>> +             if (filled > prev)
> >>>>>>>> +                     /* Made progress, don't sleep yet */
> >>>>>>>> +                     continue;
> >>>>>>>>
> >>>>>>>>            set_current_state(TASK_INTERRUPTIBLE);
> >>>>>>>>            if (signalled() || kthread_should_stop()) {
> >>>>>>>
> >>>>>>> --
> >>>>>>> Chuck Lever
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>
> >>>> --
> >>>> Chuck Lever
> >>>>
> >>>>
> >>>>
> >>
> >> --
> >> Chuck Lever
> >>
> >>
> >>



