On Fri, 2023-08-11 at 19:08 +0000, Chuck Lever III wrote:
> 
> > On Aug 11, 2023, at 3:06 PM, Jeff Layton <jlayton@xxxxxxxxxx> wrote:
> > 
> > Yep. Reverting that patch seemed to fix the problem. Chuck, mind just
> > dropping this patch from nfsd-next?
> 
> I can, but let's make sure that doesn't break anything else first.
> Send me any test results, and I'll run some tests here too.
> 

Sure, results attached from several runs. These are from running the "all"
tests in pynfs for both v4.0 and v4.1. The kernels are:

6.5.0-rc5-00050-gb1e667caad15: your nfsd-next branch as of today
6.5.0-rc5+: gb1e667caad15, but with this patch reverted

In summary:

[jlayton@tleilax pynfs-results]$ grep '"failures":' *
6.4.0-v4.0.json:    "failures": 0,
6.4.0-v4.1.json:    "failures": 0,
6.5.0-rc5-00050-gb1e667caad15-v4.0.json:    "failures": 17,
6.5.0-rc5-00050-gb1e667caad15-v4.1.json:    "failures": 1,
6.5.0-rc5-v4.0.json:    "failures": 3,
6.5.0-rc5+-v4.0.json:    "failures": 0,
6.5.0-rc5-v4.1.json:    "failures": 0,
6.5.0-rc5+-v4.1.json:    "failures": 0,

So, reverting this patch makes things look good in nfsd-next.

The more worrisome problem is that 6.5.0-rc5-v4.0.json shows 3 regressions
in current mainline. It looks like some sort of data corruption at first
glance. I'm still looking into that one.
-- 
Jeff Layton <jlayton@xxxxxxxxxx>
Attachment: pynfs-results.tar.xz
Description: application/xz-compressed-tar