Re: Adventures in NFS re-exporting

On Thu, Nov 12, 2020 at 01:01:24PM +0000, Daire Byrne wrote:
> 
> ----- On 9 Nov, 2020, at 16:02, bfields bfields@xxxxxxxxxxxx wrote:
> > On Wed, Oct 21, 2020 at 10:33:52AM +0100, Daire Byrne wrote:
> >> Trond has posted some (v3) patches to emulate lookupp for NFSv3 (a million
> >> thanks!) so I applied them to v5.9.1 and ran some more tests using that on the
> >> re-export server. Again, I just pathologically dropped inode & dentry caches
> >> every second on the re-export server (vfs_cache_pressure=100) while a client
> >> looped through some application loading tests.
> >> 
> >> Now for every combination of re-export (NFSv3 -> NFSv4.x or NFSv4.x -> NFSv3), I
> >> no longer see any stale file handles (/proc/net/rpc/nfsd) when dropping inode &
> >> dentry caches (yay!).
> >> 
> >> However, my assumption that some of the input/output errors I was seeing were
> >> related to the estales seems to have been misguided. After running these tests
> >> again without any estales, it now looks like a different issue that is unique
> >> to re-exporting NFSv3 from an NFSv4.0 originating server (either Linux or
> >> Netapp). The lookups are all fine (no estale) but reading some files eventually
> >> gives an input/output error on multiple clients which remain consistent until
> >> the re-export nfs-server is restarted. Again, this only occurs while dropping
> >> inode + dentry caches.
> >> 
> >> So in summary, while continuously dropping inode/dentry caches on the re-export
> >> server:
> > 
> > How continuously, exactly?
> > 
> > I recall that there are some situations where the best the client can do
> > to handle an ESTALE is just retry.  And that our code generally just
> > retries once and then gives up.
> > 
> > I wonder if it's possible that the client or re-export server can get
> > stuck in a situation where they can't guarantee forward progress in the
> > face of repeated ESTALEs.  I don't have a specific case in mind, though.
> 
> I was dropping caches every second in a loop on the NFS re-export server. Meanwhile, a large python application that takes ~15 seconds to complete was looping on a client of the re-export server. So we were repeatedly clearing out the caches such that the same python paths had to be re-populated over and over.
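
For anyone wanting to reproduce this, here's a minimal sketch in C of
what I assume the drop loop amounts to (the original was presumably
just a shell one-liner): write "2" (reclaim dentries and inodes) to
/proc/sys/vm/drop_caches once a second, as root, while the client
workload loops:

	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		for (;;) {
			FILE *f = fopen("/proc/sys/vm/drop_caches", "w");

			if (!f) {
				perror("/proc/sys/vm/drop_caches");
				return 1;
			}
			/* 2 = free reclaimable dentries and inodes */
			fputs("2\n", f);
			fclose(f);
			sleep(1);
		}
	}
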
> 
> Having just completed a bunch of fresh cloud rendering with v5.9.1 and Trond's NFSv3 lookupp emulation patches, I can now revise my original list of issues that others will likely experience if they ever try to do this craziness:
> 
> 1) Don't re-export NFSv4.0 unless you set vfs_cache_pressure=0, otherwise you will see random input/output errors on your clients when things are dropped out of the cache. In the end we gave up on using NFSv4.0 with our Netapps because the 7-mode implementation seemed a bit flaky with modern Linux clients (Linux NFSv4.2 servers, on the other hand, have been rock solid). We now use NFSv3 with Trond's lookupp emulation patches instead.

So,

		NFSv4.2			  NFSv4.2
	client --------> re-export server -------> original server

works as long as both servers are recent Linux, but when the original
server is a Netapp, you need the protocol on both hops to be v3, is
that right?
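
(For completeness, the vfs_cache_pressure workaround in 1) is just the
standard VM sysctl, set on the re-export server:

	# /etc/sysctl.conf (or: sysctl -w vm.vfs_cache_pressure=0)
	vm.vfs_cache_pressure = 0

The documented caveat applies: at 0 the kernel never reclaims dentries
and inodes due to memory pressure, which can easily lead to
out-of-memory conditions.)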

> 2) In order to better utilise the re-export server's client cache when re-exporting an NFSv3 server (using either NFSv3 or NFSv4), we still need to use the horrible inode_peek_iversion_raw hack to maintain good metadata performance for large numbers of clients. Otherwise, each of the re-export server's clients can invalidate the re-export server's client cache. Once you have hundreds of clients, they combine to invalidate the cache constantly, resulting in an order of magnitude slower metadata performance. If you are re-exporting an NFSv4.x server (with either NFSv3 or NFSv4.x), this hack is not required.

Have we figured out why that's required, or found a longer-term
solution?  (Apologies, the memory of the earlier conversation is
fading....)
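
For reference, my recollection is that the hack is a one-liner in
fs/nfsd/nfsfh.h that makes knfsd pass the back-end server's change
attribute through untouched rather than synthesizing one from ctime +
i_version.  A sketch from memory (not a reviewed patch, so the details
may be off):

	/* fs/nfsd/nfsfh.h; inode_peek_iversion_raw() comes from
	 * <linux/iversion.h> */
	static inline u64 nfsd4_change_attribute(struct inode *inode)
	{
		/*
		 * On a re-export, the NFS client inode's i_version holds
		 * the original server's change attribute.  Reading it
		 * with inode_peek_iversion_raw(), rather than mixing in
		 * ctime via inode_query_iversion(), keeps the value
		 * stable when the inode is evicted and re-instantiated,
		 * so the re-export server's clients don't see spurious
		 * change-attribute bumps.
		 */
		return inode_peek_iversion_raw(inode);
	}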

--b.


