Re: Parallel shared to exclusive flock conversion blocks forever on single NFS client

On 12.03.25 23:57, Trond Myklebust wrote:
On Wed, 2025-03-12 at 22:57 +0100, Tycho Kirchner wrote:
Dear NFS kernel developers,
In `man 2 flock` it is documented that an existing lock can be
converted to a new lock mode. Multiple processes on the *same* client
converting their LOCK_SH to LOCK_EX quickly results in a deadlock of
the client processes. This can already be reproduced on a single
physical machine, for instance with the NFS server running in a VM and
the host machine connecting to it as a client.

Steps to reproduce:
- Set up a virtual machine with VirtualBox and install an NFS server
- Create an /etc/exports entry: /home/VMUSER/nfs  10.0.2.2(rw,async)
- Create a NAT firewall rule forwarding NFS port 2049 to the VM
- Mount the export on the host, cd into it and create an empty file:
    $ sudo mount -t nfs 127.0.0.1:/home/VMUSER/nfs  /somedir
    $ cd /somedir
    $ touch foo
- Execute the attached ~/locktest.py in parallel on the client (a
  sketch follows below this list):
    $ for i in {1..10}; do ~/locktest.py foo & done; wait
- Wait half a minute. The command does not terminate. Ever.
- Abort execution with Ctrl+C and kill leftovers: pkill -f locktest.py
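
The attached locktest.py does not appear in the archive. Based on the
description in this thread, a minimal reconstruction might look like
the following; the file-object name `a` is taken from the Notes below,
and the iteration count is an assumption:

    #!/usr/bin/env python3
    # Reconstruction of the attached reproducer, not the original script:
    # repeatedly take a shared flock on the given file and convert it to
    # an exclusive one.
    import fcntl
    import sys

    with open(sys.argv[1]) as a:
        for _ in range(1000):              # iteration count is a guess
            fcntl.flock(a, fcntl.LOCK_SH)  # take shared lock
            # Workaround from the Notes below: uncommenting this explicit
            # unlock avoids the deadlock entirely.
            # fcntl.flock(a, fcntl.LOCK_UN)
            fcntl.flock(a, fcntl.LOCK_EX)  # convert to exclusive; hangs on NFS
            fcntl.flock(a, fcntl.LOCK_UN)  # release for the other processes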

Notes:
- According to my tests, the block quickly occurs from three concurrent
  client processes onwards.
- Placing a `fcntl.flock(a, fcntl.LOCK_UN)` before fcntl.LOCK_EX is
  enough that the deadlock never occurs.
- OR'ing `| fcntl.LOCK_NB` quickly results in endless »BlockingIOError«
  exceptions with no client process making any progress. See the also
  attached ~/locktest_NB.py (sketched after this list).
- Multiple distributions, kernel versions and combinations were tested,
  e.g. NFS client kernel 6.6.67 on Debian 12, kernel 6.12.17-amd64 on
  Debian Testing, and kernel 6.4.0-150600.23.38-default on openSUSE
  Leap 15.6. The error was always and quickly reproducible.
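
Like locktest.py, the attached locktest_NB.py is not reproduced in the
archive; presumably it differs only in the conversion step, roughly (a
hypothetical retry loop):

    # Hypothetical sketch of the LOCK_NB variant of the conversion step:
    # with LOCK_NB the conversion fails immediately instead of blocking.
    while True:
        try:
            fcntl.flock(a, fcntl.LOCK_EX | fcntl.LOCK_NB)
            break
        except BlockingIOError:
            pass  # every process keeps failing, as described above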


The same manpage also states:

        Converting a lock (shared to exclusive, or vice versa) is not guaranteed
        to be atomic: the existing lock is first removed, and then a new lock is
        established. Between these two steps, a pending lock request by another
        process may be granted, with the result that the conversion either
        blocks, or fails if LOCK_NB was specified. (This is the original BSD
        behavior, and occurs on many other implementations.)

so there is no harm in adding the LOCK_UN because you cannot expect
atomicity.

Thanks for the response, Trond. I also read this part of the manpage,
but fail to understand why it would justify a deadlock scenario with
the commands I described. On the contrary, in my understanding the lack
of atomicity actually makes it feasible for an implementation to avoid
the deadlock. Here's how:

  Process A          Process B      comment
LOCK_SH granted   _not_started_
…                 LOCK_SH granted
LOCK_EX blocking  …                A removes SH-lock and waits for B
…                 LOCK_EX granted  granted since A removed SH-lock
…                 LOCK_UN
LOCK_EX granted


However, I think the NFS implementation incorrectly does _not_ remove
the initial shared lock of A. As a result, the processes deadlock in
the following way:

  Process A          Process B      comment
LOCK_SH granted   _not_started_
…                 LOCK_SH granted
LOCK_EX blocking  …                 A keeps SH-lock and waits for B
…                 LOCK_EX blocking  B keeps SH-lock and waits for A
DEADLOCK          DEADLOCK
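
The difference between the two tables is easy to exercise with a short
two-process test. Below is a minimal sketch (file path, sleep duration
and output are illustrative, not from the original report) that
completes on a local filesystem but hangs on the NFS mount described
above:

    #!/usr/bin/env python3
    # Two processes each take LOCK_SH on the same file, then convert to
    # LOCK_EX. Locally the kernel drops the shared lock before blocking
    # on the conversion (first table); over NFS it apparently does not
    # (second table).
    import fcntl
    import os
    import sys
    import time

    def worker(path):
        with open(path) as a:
            fcntl.flock(a, fcntl.LOCK_SH)
            time.sleep(0.5)                # let the other process get LOCK_SH
            fcntl.flock(a, fcntl.LOCK_EX)  # the conversion in question
            fcntl.flock(a, fcntl.LOCK_UN)
        os._exit(0)

    pids = []
    for _ in range(2):
        pid = os.fork()
        if pid == 0:
            worker(sys.argv[1])            # child never returns
        pids.append(pid)
    for pid in pids:
        os.waitpid(pid, 0)
    print("no deadlock")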


This deadlock is unnecessary, and I think the NFS implementation of
flock conversions (or fcntl.F_SETLK) should be fixed.
Thanks, Tycho



