On Thu, 2022-02-24 at 09:15 -0500, Benjamin Coddington wrote:
> On 23 Feb 2022, at 16:12, trondmy@xxxxxxxxxx wrote:
> 
> > From: Trond Myklebust <trond.myklebust@xxxxxxxxxxxxxxx>
> > 
> > Instead of relying on counting the page offsets as we walk through
> > the page cache, switch to calculating them algorithmically.
> > 
> > Signed-off-by: Trond Myklebust <trond.myklebust@xxxxxxxxxxxxxxx>
> > ---
> >  fs/nfs/dir.c | 18 +++++++++++++-----
> >  1 file changed, 13 insertions(+), 5 deletions(-)
> > 
> > diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
> > index 8f17aaebcd77..f2258e926df2 100644
> > --- a/fs/nfs/dir.c
> > +++ b/fs/nfs/dir.c
> > @@ -248,17 +248,20 @@ static const char *nfs_readdir_copy_name(const char *name, unsigned int len)
> >  	return ret;
> >  }
> > 
> > +static size_t nfs_readdir_array_maxentries(void)
> > +{
> > +	return (PAGE_SIZE - sizeof(struct nfs_cache_array)) /
> > +	       sizeof(struct nfs_cache_array_entry);
> > +}
> > +
> 
> Why the choice to use a runtime function call rather than the
> compiler's calculation?  I suspect that the end result is the same,
> as the compiler will optimize it away, but I'm curious if there's a
> good reason for this.
> 

The comparison is more efficient because no pointer arithmetic is
needed. As you said, the above function always evaluates to a
constant, and the array->size has been pre-calculated.

-- 
Trond Myklebust
Linux NFS client maintainer, Hammerspace
trond.myklebust@xxxxxxxxxxxxxxx
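
For illustration, the compile-time calculation Benjamin is alluding to
could be written as a macro along the lines of the sketch below. The
macro name is hypothetical and not part of the posted patch; it assumes
the struct nfs_cache_array / struct nfs_cache_array_entry layouts from
fs/nfs/dir.c and the kernel's PAGE_SIZE definition.

/* Hypothetical compile-time equivalent of nfs_readdir_array_maxentries().
 * PAGE_SIZE and the sizeof() operands are compile-time constants, so the
 * division folds to a constant, just as the helper function above does
 * once the compiler optimizes it away.
 */
#define NFS_READDIR_ARRAY_MAXENTRIES				\
	((PAGE_SIZE - sizeof(struct nfs_cache_array)) /	\
	 sizeof(struct nfs_cache_array_entry))

Either form yields the same value; the posted helper simply keeps the
calculation next to the structures it describes.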