Re: 3.0+ NFS issues (bisected)

On Fri, Aug 17, 2012 at 03:18:00PM -0400, J. Bruce Fields wrote:
> On Fri, Aug 17, 2012 at 09:29:40PM +0400, Michael Tokarev wrote:
> > On 17.08.2012 21:26, Michael Tokarev wrote:
> > > On 17.08.2012 21:18, J. Bruce Fields wrote:
> > >> On Fri, Aug 17, 2012 at 09:12:38PM +0400, Michael Tokarev wrote:
> > > []
> > >>> So we're calling svc_recv in a tight loop, eating
> > >>> all available CPU.  (The above is with just 2 nfsd
> > >>> threads).
> > >>>
> > >>> Something is definitely wrong here.  And it happens much more
> > >>> often after the mentioned commit (f03d78db65085).
> > >>
> > >> Oh, neat.  Hm.  That commit doesn't really sound like the cause, then.
> > >> Is that busy-looping reproducible on kernels before that commit?
> > > 
> > > Note I bisected this issue to this commit.  I haven't seen it
> > > happening before this commit, and reverting it from 3.0 or 3.2
> > > kernel makes the problem go away.
> > > 
> > > I guess it is looping there:
> > > 
> > > 
> > > net/sunrpc/svc_xprt.c:svc_recv()
> > > ...
> > >         len = 0;
> > > ...
> > >         if (test_bit(XPT_LISTENER, &xprt->xpt_flags)) {
> > > ...
> > >         } else if (xprt->xpt_ops->xpo_has_wspace(xprt)) {  <=== here -- has no wspace due to memory...
> > > ...  len = <something>
> > >         }
> > > 
> > >         /* No data, incomplete (TCP) read, or accept() */
> > >         if (len == 0 || len == -EAGAIN)
> > >                 goto out;
> > > ...
> > > out:
> > >         rqstp->rq_res.len = 0;
> > >         svc_xprt_release(rqstp);
> > >         return -EAGAIN;
> > > }
> > > 
> > > I'm trying to verify this theory...
> > 
> > Yes.  I inserted a printk there, and all these million times while
> > we're waiting in this EAGAIN loop, this printk is triggering:
> > 
> > ....
> > [21052.533053]  svc_recv: !has_wspace
> > [21052.533070]  svc_recv: !has_wspace
> > [21052.533087]  svc_recv: !has_wspace
> > [21052.533105]  svc_recv: !has_wspace
> > [21052.533122]  svc_recv: !has_wspace
> > [21052.533139]  svc_recv: !has_wspace
> > [21052.533156]  svc_recv: !has_wspace
> > [21052.533174]  svc_recv: !has_wspace
> > [21052.533191]  svc_recv: !has_wspace
> > [21052.533208]  svc_recv: !has_wspace
> > [21052.533226]  svc_recv: !has_wspace
> > [21052.533244]  svc_recv: !has_wspace
> > [21052.533265] calling svc_recv: 1228163 times (err=-4)
> > [21052.533403] calling svc_recv: 1226616 times (err=-4)
> > [21052.534520] nfsd: last server has exited, flushing export cache
> > 
> > (I stopped nfsd since it was flooding the log).
> > 
> > I can only guess that before that commit, we always had space,
> > now we don't anymore, and are looping like crazy.
> 
> Thanks!  But, arrgh--that should be enough to go on at this point, but
> I'm not seeing it.  If has_wspace is returning false then it's likely
> also returning false to the call at the start of svc_xprt_enqueue()

Wait a minute, that assumption's a problem because that calculation
depends in part on xpt_reserved, which is changed here....

In particular, svc_xprt_release() calls svc_reserve(rqstp, 0), which
subtracts rqstp->rq_reserved and then calls svc_xprt_enqueue, now with a
lower xpt_reserved value.  That could well explain this.

--b.

> (see
> svc_xprt_has_something_to_do), which means the xprt shouldn't be getting
> requeued and the next svc_recv call should find no socket ready (so
> svc_xprt_dequeue() returns NULL), and goes to sleep.
> 
> But clearly it's not working that way....
> 
> --b.
--
To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html