Re: [PATCH v13 19/19] nfs: add FAQ section to Documentation/filesystems/nfs/localio.rst

On Wed, Aug 28, 2024 at 09:41:05AM +1000, NeilBrown wrote:
> On Wed, 28 Aug 2024, Trond Myklebust wrote:
> > On Wed, 2024-08-28 at 07:49 +1000, NeilBrown wrote:
> > > On Tue, 27 Aug 2024, Trond Myklebust wrote:
> > > > > 
> > > > > 
> > > > > > On Aug 25, 2024, at 9:56 PM, NeilBrown <neilb@xxxxxxx> wrote:
> > > > > > 
> > > > > > While I'm not advocating for an over-the-wire request to map a
> > > > > > filehandle to a struct nfsd_file*, I don't think you can
> > > > > > convincingly argue against it without concrete performance
> > > > > > measurements.
> > > > > 
> > > > 
> > > > What is the value of doing an open over the wire? What are you
> > > > trying to accomplish that can't be accomplished without going
> > > > over the wire?
> > > 
> > > The advantage of going over the wire is avoiding code duplication.
> > > The cost is latency.  Obviously the goal of LOCALIO is to find those
> > > points where the latency saving justifies the code duplication.
> > > 
> > > When opening with AUTH_UNIX the code duplication to determine the
> > > correct credential is small and easy to review.  If we ever wanted to
> > > support KRB5 or TLS I would be a lot less comfortable about reviewing
> > > the code duplication.
> > > 
> > > So I think it is worth considering whether an over-the-wire open is
> > > really all that costly.  As I noted we already have an over-the-wire
> > > request at open time.  We could conceivably send the LOCALIO-OPEN
> > > request at the same time so as not to add latency.  We could receive
> > > the reply through the in-kernel backchannel so there is no RPC reply.
> > > 
> > > That might all be too complex and might not be justified.  My point is
> > > that I think the trade-offs are subtle and I think the FAQ answer cuts
> > > off an avenue that hasn't really been explored.
> > > 
> > 
> > So, your argument is that if there were a hypothetical situation where
> > we wanted to add krb5 or TLS support, then we'd have more code to
> > review?
> > 
> > The counter-argument would be that we've already established the right
> > of the client to do I/O to the file. This will already have been done
> > by an over-the-wire call to OPEN (NFSv4), ACCESS (NFSv3/NFSv4) or
> > CREATE (NFSv3). Those calls will have used krb5 and/or TLS to
> > authenticate the user. All that remains to be done is perform the I/O
> > that was authorised by those calls.
> 
> The other thing that remains is to get the correct 'struct cred *' to
> store in ->f_cred (or to use for lookup in the nfsd filecache).
> 
> > 
> > Furthermore, we'd already have established that the client and the
> > knfsd instance are running in the same kernel space on the same
> > hardware (whether real or virtualised). There is no chance for a bad
> > actor to compromise the one without also compromising the other.
> > However, let's assume that is somehow possible: how is throwing in an
> > on-the-wire protocol that is initiated by the one and interpreted by
> > the other going to help, given that both have access to the exact same
> > RPCSEC_GSS/TLS session and shared secret information via shared kernel
> > memory?
> > 
> > So again, what problem are you trying to fix?
> 
> Conversely:  what exactly is this FAQ entry trying to argue against?
>
> My current immediate goal is for the FAQ to be useful.  It mostly is,
> but this one question/answer isn't clear to me.

The current answer to question 6 isn't meant to deal in absolutes,
nor does it have to (though I agree that "negating any benefit"
should be softened, given we don't _know_ how it would play out
without fully implementing open-over-the-wire and benchmarking it).

We just need to give context for what motivated the current
implementation: network protocol avoidance where possible.

Given everything, do you have a suggestion for how to improve the
answer to question 6?  Happy to revise it however you like.

Here is the incremental patch I just came up with. Any better?

diff --git a/Documentation/filesystems/nfs/localio.rst b/Documentation/filesystems/nfs/localio.rst
index 4b6d63246479..5d652f637a97 100644
--- a/Documentation/filesystems/nfs/localio.rst
+++ b/Documentation/filesystems/nfs/localio.rst
@@ -120,12 +120,13 @@ FAQ
    using RPC, beneficial?  Is the benefit pNFS specific?
 
    Avoiding the use of XDR and RPC for file opens is beneficial to
-   performance regardless of whether pNFS is used. However adding a
-   requirement to go over the wire to do an open and/or close ends up
-   negating any benefit of avoiding the wire for doing the I/O itself
-   when we’re dealing with small files. There is no benefit to replacing
-   the READ or WRITE with a new open and/or close operation that still
-   needs to go over the wire.
+   performance regardless of whether pNFS is used. Especially when
+   dealing with small files, it's best to avoid going over the wire
+   whenever possible; otherwise the added round trips could reduce or
+   even negate the benefit of avoiding the wire for the small file
+   I/O itself. Given LOCALIO's requirements, the current approach of
+   having the client perform a server-side file open, without using
+   RPC, is ideal. If requirements change in the future, we can adapt.
 
 7. Why is LOCALIO only supported with UNIX Authentication (AUTH_UNIX)?
 
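As an aside, the "small and easy to review" AUTH_UNIX credential
duplication Neil refers to above boils down to mapping the RPC
credential's uid/gid onto a kernel 'struct cred' before doing the
local open. A minimal sketch of that idea follows; the helper name
and the use of init_user_ns are illustrative assumptions, not the
actual LOCALIO code:

/*
 * Illustrative sketch only -- not the actual LOCALIO implementation.
 * Map an AUTH_UNIX uid/gid pair onto a kernel credential suitable
 * for a local file open.
 */
#include <linux/cred.h>
#include <linux/sched/task.h>
#include <linux/uidgid.h>

static const struct cred *localio_map_unix_cred(uid_t uid, gid_t gid)
{
	struct cred *cred = prepare_kernel_cred(&init_task);

	if (!cred)
		return NULL;

	/* Act as the user named by the RPC credential for permission checks. */
	cred->fsuid = make_kuid(&init_user_ns, uid);
	cred->fsgid = make_kgid(&init_user_ns, gid);
	if (!uid_valid(cred->fsuid) || !gid_valid(cred->fsgid)) {
		put_cred(cred);
		return NULL;
	}
	return cred;	/* caller drops with put_cred() when done */
}

The small-file point in the revised answer is also easy to
sanity-check: a 4KiB read served from the server's page cache
completes in a few microseconds, while even a fast LAN round trip
costs tens to hundreds of microseconds, so a single extra open/close
round trip can dominate the I/O it was meant to accelerate.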
