RE: Adventures in NFS re-exporting

> On Tue, Nov 24, 2020 at 08:35:06PM +0000, Daire Byrne wrote:
> > Sometimes I have seen clusters of 16 GETATTRs for the same file on the
> > wire with nothing else inbetween. So if the re-export server is the
> > only "client" writing these files to the originating server, why do we
> > need to do so many repeat GETATTR calls when using nconnect>1? And why
> > are the COMMIT calls required when the writes are coming via nfsd but
> > not from userspace on the re-export server? Is that due to some sort
> > of memory pressure or locking?
> >
> > I picked the NFSv3 originating server case because my head starts to
> > hurt tracking the equivalent packets, stateids and compound calls with
> > NFSv4. But I think it's mostly the same for NFSv4. The writes through
> > the re-export server lead to lots of COMMITs and (double) GETATTRs but
> > using nconnect>1 at least doesn't seem to make it any worse like it
> > does for NFSv3.
> >
> > But maybe you actually want all the extra COMMITs to help better
> > guarantee your writes when putting a re-export server in the way?
> > Perhaps all of this is by design...
> 
> Maybe that's close-to-open combined with the server's tendency to
> open/close on every IO operation?  (Though the file cache should have
> helped with that, I thought; as would using version >=4.0 on the final
> client.)
> 
> Might be interesting to know whether the nocto mount option makes a
> difference.  (So, add "nocto" to the mount options for the NFS mount that
> you're re-exporting on the re-export server.)
> 
> By the way I made a start at a list of issues at
> 
> 	http://wiki.linux-nfs.org/wiki/index.php/NFS_re-export
> 
> but I was a little vague on which of your issues remained and didn't take
> much time over it.
> 
> (If you want an account on that wiki BTW I seem to recall you just have to
> ask Trond (for anti-spam reasons).)
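
For reference, the nocto suggestion above amounts to something like the
following on the re-export server (a rough sketch; "origin", the paths and
the fsid are placeholders, and the other options would match whatever the
existing re-export mount already uses):

	# client mount of the originating server, with close-to-open relaxed
	mount -t nfs -o vers=3,nocto origin:/export /srv/origin

	# then re-export it via /etc/exports as before, e.g.
	/srv/origin  *(rw,no_subtree_check,fsid=1234)

The point would just be to see whether relaxing close-to-open consistency on
the re-export server's client mount cuts down on the repeated GETATTRs.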

How much conversation about re-export has taken place at the wider NFS
community level? I have an interest because Ganesha supports re-export via
the PROXY_V3 and PROXY_V4 FSALs. We currently don't have a data cache,
though one has been discussed; we do have attribute and dirent caches.
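
For anyone not familiar with the Ganesha side, a re-export via the PROXY
FSALs is configured with an EXPORT block roughly along these lines (a sketch
from memory; the exact parameter names should be checked against the
FSAL_PROXY documentation, and the address/paths are placeholders):

	EXPORT {
		Export_Id = 10;
		Path = /export;            # path on the originating server
		Pseudo = /export;
		Access_Type = RW;
		FSAL {
			Name = PROXY_V4;       # or PROXY_V3
			Srv_Addr = 192.0.2.10; # the originating server
		}
	}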

Looking over the wiki page, I have considered allowing a re-export of a
Ganesha export to be specified without adding another layer of handle
encapsulation. Ganesha encapsulates the export_fs handle in a way that could
be coordinated between the originating server and the re-export server, so
that both would effectively share the same encapsulation layer.
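
To make the encapsulation point concrete: conceptually, a proxy/re-export
handle wraps the originating server's opaque handle inside the re-export's
own handle, something like the sketch below (purely illustrative, not
Ganesha's or knfsd's actual layout). If the two servers coordinate on that
layer, the re-export could hand out handles the originating server already
understands, rather than wrapping them again.

	/* Illustrative only: a re-export handle that wraps the origin's
	 * opaque handle plus a little local metadata, all of which has to
	 * fit inside the protocol's handle size limit. */
	#include <stdint.h>

	#define NFS3_FHSIZE 64  /* max NFSv3 handle size (RFC 1813) */

	struct reexport_fh {
		uint16_t export_id;                  /* which re-export this is  */
		uint16_t origin_len;                 /* bytes of wrapped handle  */
		uint8_t  origin_fh[NFS3_FHSIZE - 4]; /* origin handle, verbatim  */
	};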

I'd love to see some re-export best practices shared among server
implementations, and also what we can do to improve things when two server
implementations are interoperating via re-export.

Frank



