----- On 16 Nov, 2020, at 15:53, bfields bfields@xxxxxxxxxxxx wrote:

> On Sat, Nov 14, 2020 at 12:57:24PM +0000, Daire Byrne wrote:
>> Now if anyone has any ideas why all the read calls to the originating
>> server are limited to a maximum of 128k (with rsize=1M) when coming
>> via the re-export server's nfsd threads, I see that as the next
>> biggest performance issue. Reading directly on the re-export server
>> with a userspace process issues 1MB reads as expected. It doesn't
>> happen for writes (wsize=1MB all the way through) but I'm not sure if
>> that has more to do with async and write back caching helping to build
>> up the size before commit?
>
> I'm not sure where to start with this one....
>
> Is this behavior independent of protocol version and backend server?

It seems to be the case for all combinations of backend versions and
re-export versions. But it does look like it is related to readahead
somehow.

The default readahead for a client mount is 128k. I just increased it to
1024k on the client mount of the originating server (on the re-export
server) and now it's doing the expected 1MB (rsize) read requests back
to onprem from the clients all the way through, i.e.

  echo 1024 > /sys/class/bdi/0:52/read_ahead_kb

So, there is a difference in behaviour when reading from the client
mount with userspace processes versus the knfsd threads on the
re-export server.

Daire
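
P.S. For anyone wanting to try the same readahead tweak, a rough sketch
of the steps (the 0:52 BDI number above is specific to this box; the
server name, export and mount path below are only examples, and the
mount options are illustrative rather than the exact ones used here):

  # example client mount of the originating server on the re-export server
  mount -t nfs -o rsize=1048576,wsize=1048576 onprem:/export /srv/onprem

  # find the BDI device number (major:minor) backing that mount, e.g. "0:52"
  BDI=$(mountpoint -d /srv/onprem)

  # the NFS client default readahead is 128k; raise it to 1024k to match rsize=1M
  echo 1024 > /sys/class/bdi/$BDI/read_ahead_kb

  # confirm the new value
  cat /sys/class/bdi/$BDI/read_ahead_kb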