Yes, the locks are requested from another node. What fs are you using? I
don't think it should make any difference, but I can try it with the same
fs. Make sure you are using v3; it does work for v4.
Marc.

From: Bruce Fields <bfields@xxxxxxxxxxxx>
To: Marc Eshel/Almaden/IBM@IBMUS
Cc: linux-nfs@xxxxxxxxxxxxxxx, Tomer Perry <TOMP@xxxxxxxxxx>
Date: 07/01/2016 02:01 PM
Subject: Re: grace period

On Fri, Jul 01, 2016 at 01:46:42PM -0700, Marc Eshel wrote:
> This is my v3 test that shows the lock still there after echo 0 >
> /proc/fs/nfsd/threads
>
> [root@sonascl21 ~]# cat /etc/redhat-release
> Red Hat Enterprise Linux Server release 7.2 (Maipo)
>
> [root@sonascl21 ~]# uname -a
> Linux sonascl21.sonasad.almaden.ibm.com 3.10.0-327.el7.x86_64 #1 SMP
> Thu Oct 29 17:29:29 EDT 2015 x86_64 x86_64 x86_64 GNU/Linux
>
> [root@sonascl21 ~]# cat /proc/locks | grep 999
> 3: POSIX ADVISORY WRITE 2349 00:2a:489486 0 999
>
> [root@sonascl21 ~]# echo 0 > /proc/fs/nfsd/threads
> [root@sonascl21 ~]# cat /proc/fs/nfsd/threads
> 0
>
> [root@sonascl21 ~]# cat /proc/locks | grep 999
> 3: POSIX ADVISORY WRITE 2349 00:2a:489486 0 999

Huh, that's not what I see. Are you positive that's the lock on the
backend filesystem and not the client-side lock (in case you're doing a
loopback mount)?

--b.

> From: Bruce Fields <bfields@xxxxxxxxxxxx>
> To: Marc Eshel/Almaden/IBM@IBMUS
> Cc: linux-nfs@xxxxxxxxxxxxxxx
> Date: 07/01/2016 01:07 PM
> Subject: Re: grace period
>
> On Fri, Jul 01, 2016 at 10:31:55AM -0700, Marc Eshel wrote:
> > It used to be that sending a KILL signal to lockd would free locks and
> > start a grace period, and when setting nfsd threads to zero,
> > nfsd_last_thread() calls nfsd_shutdown, which called lockd_down, which
> > I believe was causing both the freeing of locks and the start of a
> > grace period, or maybe it was setting threads back to a value > 0 that
> > started the grace period.
>
> OK, apologies, I didn't know (or forgot) that.
>
> > Anyway, starting with the kernels in RHEL 7.1 and up, echo 0 >
> > /proc/fs/nfsd/threads doesn't do it anymore; I assume going to a
> > common grace period for NLM and NFSv4 changed things.
> > The question is how to do IP fail-over, so when a node fails and the
> > IP is moving to another node, we need to go into grace period on all
> > the nodes in the cluster so the locks of the failed node are not given
> > to anyone other than the client that is reclaiming its locks.
> > Restarting the NFS server is too disruptive.
>
> What's the difference? Just that clients don't have to reestablish tcp
> connections?
>
> --b.
>
> > For NFSv3, a KILL signal to lockd still works, but there is no way to
> > do the same for v4.
> > Marc.
>
> > From: Bruce Fields <bfields@xxxxxxxxxxxx>
> > To: Marc Eshel/Almaden/IBM@IBMUS
> > Cc: linux-nfs@xxxxxxxxxxxxxxx
> > Date: 07/01/2016 09:09 AM
> > Subject: Re: grace period
> >
> > On Thu, Jun 30, 2016 at 02:46:19PM -0700, Marc Eshel wrote:
> > > I see that setting the number of nfsd threads to 0 (echo 0 >
> > > /proc/fs/nfsd/threads) is not releasing the locks and putting the
> > > server in grace mode.
> >
> > Writing 0 to /proc/fs/nfsd/threads shuts down knfsd. So it should
> > certainly drop locks. If that's not happening, there's a bug, but we'd
> > need to know more details (version numbers, etc.) to help.
> >
> > That alone has never been enough to start a grace period--you'd have
> > to start knfsd again to do that.
> >
> > > What is the best way to go into grace period, in new versions of the
> > > kernel, without restarting the nfs server?
> >
> > Restarting the nfs server is the only way. That's true on older
> > kernels too, as far as I know. (OK, you can apparently make lockd do
> > something like this with a signal; I don't know if that's used much,
> > and I doubt it works outside an NFSv3-only environment.)
> >
> > So if you want locks dropped and a new grace period, then you should
> > run "systemctl restart nfs-server", or your distro's equivalent.
> >
> > But you're probably doing something more complicated than that. I'm
> > not sure I understand the question....
> >
> > --b.
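
For anyone trying to reproduce the behavior discussed above, here is a
minimal shell sketch of the sequence. It assumes a RHEL 7-style server
managed by systemd, a lock held by a remote NFSv3 client whose /proc/locks
entry matches the "999" pattern from Marc's test, and that the lockd
kernel thread can be found with pgrep (the pgrep lookup is an assumption,
not something taken from the thread):

    # 1. Confirm the client's lock is recorded on the backend filesystem
    #    (and is not a client-side lock from a loopback mount).
    cat /proc/locks | grep 999

    # 2. Shut down knfsd. On a correctly behaving kernel this alone should
    #    drop the locks (it does for v4 in this thread; the v3 lock lingers).
    echo 0 > /proc/fs/nfsd/threads
    cat /proc/locks | grep 999

    # 3. To drop locks *and* start a new grace period, restart the server,
    #    as suggested in the thread:
    systemctl restart nfs-server

    # 4. Older NFSv3-only setups could instead signal lockd to free NLM
    #    locks and re-enter grace (assumption: pgrep -x finds the lockd
    #    kernel thread by name on this kernel):
    kill -KILL "$(pgrep -x lockd)"

The point of the thread is that step 2 no longer frees NLM (v3) locks on
RHEL 7.1+ kernels, so for clustered IP fail-over the only supported path
is the full restart in step 3.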