On Mon, 2011-05-16 at 16:21 -0400, Chuck Lever wrote:
> On May 16, 2011, at 3:43 PM, Trond Myklebust wrote:
> 
> > On Mon, 2011-05-16 at 12:36 -0700, Harry Edmon wrote:
> >> On 05/16/11 12:22, Chuck Lever wrote:
> >>> On May 16, 2011, at 3:12 PM, Harry Edmon wrote:
> >>> 
> >>>> Attached is 1000 lines of output from tshark when the problem is
> >>>> occurring.  The client and server are connected by a private ethernet.
> >>>> 
> >>> Disappointing: tshark is not telling us the return codes.  However, I see
> >>> "PUTFH;READ" then "RENEW" in a loop, which indicates the state manager
> >>> thread is being kicked off because of ongoing difficulties with state
> >>> recovery.  Is there a stuck application on that client?
> >>> 
> >>> Try again with "tshark -V".
> >>> 
> >> Here is the output from tshark -V (first 50,000 lines).  Nothing
> >> appears to be stuck, and as I said when I reboot the client into 2.6.32
> >> the problem goes away, only to reappear when I reboot it back into 2.6.38.6.
> >> 
> > Possibly, but it definitely indicates a server bug. What kind of server
> > are you using?
> > 
> > Basically, the client is getting confused because when it sends a READ,
> > the server is telling it that the lease has expired, then when it sends
> > a RENEW, the same server replies that the lease is OK...
> 
> I've seen this during migration recovery testing.  The client may be testing
> the wrong client ID.
> 
> But I wonder if it's really worth doing that separate RENEW.  I've seen the
> client send a RENEW after it gets STALE_STATEID.  Would RENEW really tell the
> client anything in that case?

It is needed. Without the RENEW, we have no idea whether or not we need
to do a full state recovery. Running a full recovery when we don't have
to is _bad_, and will usually cause us to lose delegations and may
possibly even cause us to lose locks.

-- 
Trond Myklebust
Linux NFS client maintainer

NetApp
Trond.Myklebust@xxxxxxxxxx
www.netapp.com
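
[For reference, a minimal sketch in C of the decision being described, i.e.
why the RENEW probe matters.  This is a toy model, not the actual Linux
NFSv4 state manager code: the NFS4ERR_* values come from RFC 3530, but
choose_recovery() and the recovery_action enum are invented names for
illustration only.]

/*
 * Toy model of the recovery decision discussed above -- NOT the real
 * nfs4 state manager.  When a READ fails with a lease/state error, the
 * follow-up RENEW for the same clientid tells the client whether the
 * whole lease is gone (full recovery, which loses delegations and may
 * lose locks) or whether only the stateid used by the READ is bad.
 */
#include <stdio.h>

#define NFS4_OK                  0
#define NFS4ERR_EXPIRED          10011
#define NFS4ERR_STALE_CLIENTID   10022
#define NFS4ERR_STALE_STATEID    10023	/* the error Chuck mentions */

enum recovery_action {
	RECOVER_NOTHING,	/* READ succeeded, nothing to do             */
	RECOVER_STATEID,	/* reclaim only the affected open/lock state */
	RECOVER_LEASE,		/* full recovery: re-establish the clientid
				 * and reclaim all state                     */
};

/*
 * read_status:  error returned by the failed PUTFH;READ compound
 * renew_status: result of the follow-up RENEW for the same clientid
 */
static enum recovery_action choose_recovery(int read_status, int renew_status)
{
	if (read_status == NFS4_OK)
		return RECOVER_NOTHING;

	if (renew_status == NFS4ERR_EXPIRED ||
	    renew_status == NFS4ERR_STALE_CLIENTID)
		/* The lease really is dead: everything must be reclaimed. */
		return RECOVER_LEASE;

	/*
	 * RENEW succeeded, so the lease is fine and only the stateid used
	 * by the READ is bad.  Recovering just that state avoids the full
	 * recovery that would drop delegations and possibly locks.
	 */
	return RECOVER_STATEID;
}

int main(void)
{
	/* The case in the trace: READ says the lease expired, RENEW says OK. */
	enum recovery_action a = choose_recovery(NFS4ERR_EXPIRED, NFS4_OK);

	printf("action = %s\n",
	       a == RECOVER_LEASE   ? "full lease recovery" :
	       a == RECOVER_STATEID ? "stateid-only recovery" : "nothing");
	return 0;
}

[Compiled and run as-is this prints "stateid-only recovery" for the case
seen in the trace (READ fails with an expired-lease error while RENEW
returns NFS4_OK), which is exactly the situation where skipping the RENEW
would have forced an unnecessary full recovery.]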