Re: client caching and locks

On Thu, 06 Jan 2022, 'bfields@xxxxxxxxxxxx' wrote:

> +Locking can also provide cache consistency:
>  .P
> -NLM supports advisory file locks only.
> -To lock NFS files, use
> -.BR fcntl (2)
> -with the F_GETLK and F_SETLK commands.
> -The NFS client converts file locks obtained via
> -.BR flock (2)
> -to advisory locks.
> +Before acquiring a file lock, the client revalidates its cached data for
> +the file.  Before releasing a write lock, the client flushes to the
> +server's stable storage any data in the locked range.

Surely the client revalidates *after* acquiring the lock on the server. 
Otherwise the revalidation has no value.
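
For what it's worth, the pattern the proposed text describes looks roughly
like this from an application's point of view.  This is only a sketch: the
path and byte range are made up and error handling is abbreviated.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* Hypothetical NFS-mounted file and byte range. */
	int fd = open("/mnt/nfs/shared.dat", O_RDWR);
	if (fd < 0) { perror("open"); return 1; }

	struct flock fl = {
		.l_type   = F_WRLCK,	/* exclusive lock on the range we touch */
		.l_whence = SEEK_SET,
		.l_start  = 0,
		.l_len    = 4096,
	};

	/* Taking the lock is when the client revalidates its cached data
	 * for the file - which is only useful if it happens after the
	 * server has granted the lock - so the read below sees writes
	 * committed by other clients. */
	if (fcntl(fd, F_SETLKW, &fl) < 0) { perror("F_SETLKW"); return 1; }

	char buf[4096];
	ssize_t n = pread(fd, buf, sizeof(buf), 0);
	if (n < 0) { perror("pread"); return 1; }
	if ((size_t)n < sizeof(buf))
		memset(buf + n, 0, sizeof(buf) - (size_t)n);
	/* ... modify buf ... */
	if (pwrite(fd, buf, sizeof(buf), 0) < 0) { perror("pwrite"); return 1; }

	/* Releasing the write lock is when the client flushes dirty data
	 * in the locked range to the server's stable storage. */
	fl.l_type = F_UNLCK;
	if (fcntl(fd, F_SETLK, &fl) < 0) { perror("unlock"); return 1; }

	close(fd);
	return 0;
}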

>  .P
> -When mounting servers that do not support the NLM protocol,
> -or when mounting an NFS server through a firewall
> -that blocks the NLM service port,
> -specify the
> -.B nolock
> -mount option. NLM locking must be disabled with the
> -.B nolock
> -option when using NFS to mount
> -.I /var
> -because
> -.I /var
> -contains files used by the NLM implementation on Linux.
> +A distributed application running on multiple NFS clients can take a
> +read lock for each range that it reads and a write lock for each range that
> +it writes.  On its own, however, that is insufficient to ensure that
> +reads get up-to-date data.
>  .P
> -Specifying the
> -.B nolock
> -option may also be advised to improve the performance
> -of a proprietary application which runs on a single client
> -and uses file locks extensively.
> +When revalidating caches, the client is unable to reliably determine the
> +difference between changes made by other clients and changes it made
> +itself.  Therefore, such an application would also need to prevent
> +concurrent writes from multiple clients, either by taking whole-file
> +locks on every write or by some other method.
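
To make the last paragraph concrete: "whole-file locks on every write"
means an fcntl(2) lock with l_start = 0 and l_len = 0 (a zero length
covers from l_start to the end of the file, including any future growth),
regardless of which bytes are actually being written.  A sketch, with
made-up helper names:

#include <fcntl.h>

/* Hypothetical helpers; only the struct flock parameters matter here. */
int lock_whole_file_for_write(int fd)
{
	struct flock fl = {
		.l_type   = F_WRLCK,
		.l_whence = SEEK_SET,
		.l_start  = 0,
		.l_len    = 0,	/* 0 means "to end of file", i.e. the whole file */
	};
	return fcntl(fd, F_SETLKW, &fl);	/* blocks until the lock is granted */
}

int unlock_whole_file(int fd)
{
	struct flock fl = {
		.l_type   = F_UNLCK,
		.l_whence = SEEK_SET,
		.l_start  = 0,
		.l_len    = 0,
	};
	return fcntl(fd, F_SETLK, &fl);
}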

This looks like it is documenting a bug - I would much rather the bug be
fixed.

If a client opens/reads/closes a file while no other client has the file
open, then it *must* return current data.  Currently (according to
reports) it does not reliably do this.

If a write from this client races with a write from another client
(whether or not locking is used), the fact that fetching the change attr
is not atomic w.r.t. I/O means that the client *cannot* trust any cached
data after it has closed a file to which it wrote - unless it had a
delegation.
Hmm.. that sounds a bit convoluted.

1/ If a client opens a file for write but does not get a delegation, and
   then writes to the file, then when it closes the file it *must*
   invalidate any cached data as there could have been a concurrent
   write from another client which is not visible in the changeid
   information. CTO consistency rules allow the client to keep cached
   data up to the close.
2/ If a client opens a file for write and *does* get a delegation, then
   provided it gets a changeid from the server after the final write and
   before returning the delegation, it can keep all cached data (until
   the server reports a new changeid).  A sketch of both cases follows.
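
Illustrative only - none of the type or function names below exist in the
Linux client; they just spell out the decision at close time:

/* Hypothetical per-open state; invented for illustration. */
struct open_state {
	int wrote_while_open;		/* we dirtied the file during this open */
	int holds_write_delegation;	/* server granted us a write delegation */
	unsigned long long changeid;	/* last change attribute we trust */
};

/* Hypothetical stubs standing in for the real cache/protocol operations. */
static void invalidate_cached_data(struct open_state *st) { (void)st; }
static unsigned long long fetch_changeid_from_server(struct open_state *st)
{
	(void)st;
	return 0;
}

static void on_close(struct open_state *st)
{
	if (!st->wrote_while_open)
		return;		/* cases 1 and 2 only concern files we wrote to */

	if (!st->holds_write_delegation) {
		/* Case 1: a concurrent write from another client may be
		 * hidden behind the changeid we saw, so cached data cannot
		 * be trusted past the close. */
		invalidate_cached_data(st);
	} else {
		/* Case 2: fetch a changeid after the final write and before
		 * the delegation is returned; cached data stays valid until
		 * the server reports a newer changeid. */
		st->changeid = fetch_changeid_from_server(st);
	}
}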

Note that the inability to cache in '1' *should* *not* be a performance
problem in practice.
a/ if locking is used, cached data is not trusted anyway, so no loss
b/ if locking is not used, then no concurrency is expected, so
   delegations are to be expected, so case '1' doesn't apply.

NeilBrown


