On Thu, 2022-09-01 at 15:10 -0400, Chuck Lever wrote:
> Since before the git era, NFSD has conserved the number of pages
> held by each nfsd thread by combining the RPC receive and send
> buffers into a single array of pages. This works because there are
> no cases where an operation needs a large RPC Call message and a
> large RPC Reply message at the same time.
> 
> Once an RPC Call has been received, svc_process() updates
> svc_rqst::rq_res to describe the part of rq_pages that can be
> used for constructing the Reply. This means that the send buffer
> (rq_res) shrinks when the received RPC record containing the RPC
> Call is large.
> 
> A client can force this shrinkage on TCP by sending a correctly-
> formed RPC Call header contained in an RPC record that is
> excessively large. The full maximum payload size cannot be
> constructed in that case.
> 
> Thanks to Aleksi Illikainen and Kari Hulkko for uncovering this
> issue.
> 
> Reported-by: Ben Ronallo <Benjamin.Ronallo@xxxxxxxxxxxx>
> Cc: <stable@xxxxxxxxxxxxxxx>
> Signed-off-by: Chuck Lever <chuck.lever@xxxxxxxxxx>
> ---
>  fs/nfsd/nfs3proc.c |    7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/fs/nfsd/nfs3proc.c b/fs/nfsd/nfs3proc.c
> index a41cca619338..7a159785499a 100644
> --- a/fs/nfsd/nfs3proc.c
> +++ b/fs/nfsd/nfs3proc.c
> @@ -563,13 +563,14 @@ static void nfsd3_init_dirlist_pages(struct svc_rqst *rqstp,
>  {
>  	struct xdr_buf *buf = &resp->dirlist;
>  	struct xdr_stream *xdr = &resp->xdr;
> -
> -	count = clamp(count, (u32)(XDR_UNIT * 2), svc_max_payload(rqstp));
> +	unsigned int sendbuf = min_t(unsigned int, rqstp->rq_res.buflen,
> +				     svc_max_payload(rqstp));
>  
>  	memset(buf, 0, sizeof(*buf));
>  
>  	/* Reserve room for the NULL ptr & eof flag (-2 words) */
> -	buf->buflen = count - XDR_UNIT * 2;
> +	buf->buflen = clamp(count, (u32)(XDR_UNIT * 2), sendbuf);
> +	buf->buflen -= XDR_UNIT * 2;
>  	buf->pages = rqstp->rq_next_page;
>  	rqstp->rq_next_page += (buf->buflen + PAGE_SIZE - 1) >> PAGE_SHIFT;
> 
> 

Reviewed-by: Jeff Layton <jlayton@xxxxxxxxxx>
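
For anyone reading along, here is a small userspace sketch of the arithmetic
the hunk above changes. It is not kernel code: the buffer sizes are made up,
and min_u()/clamp_u() merely stand in for the kernel's min_t()/clamp()
helpers. The point is that the new code bounds the dirlist buffer by the
space actually left in rq_res, not just by svc_max_payload().

#include <stdio.h>

#define XDR_UNIT 4u

static unsigned int min_u(unsigned int a, unsigned int b)
{
	return a < b ? a : b;
}

static unsigned int clamp_u(unsigned int v, unsigned int lo, unsigned int hi)
{
	return v < lo ? lo : (v > hi ? hi : v);
}

int main(void)
{
	unsigned int max_payload = 1048576;	/* hypothetical 1 MB max payload */
	unsigned int res_buflen  = 8192;	/* send buffer left after an
						 * excessively large RPC record */
	unsigned int count       = 1048576;	/* dircount/maxcount from client */

	/* Old behaviour: ignores how much send buffer actually remains */
	unsigned int old = clamp_u(count, XDR_UNIT * 2, max_payload)
			   - XDR_UNIT * 2;

	/* New behaviour: also bounded by the space left in rq_res */
	unsigned int sendbuf = min_u(res_buflen, max_payload);
	unsigned int new = clamp_u(count, XDR_UNIT * 2, sendbuf)
			   - XDR_UNIT * 2;

	printf("old buflen = %u (exceeds the %u bytes available)\n",
	       old, res_buflen);
	printf("new buflen = %u (fits in the send buffer)\n", new);
	return 0;
}

With these hypothetical numbers the old computation yields a reply buffer far
larger than the 8192 bytes remaining, which is exactly the shrinkage the
commit message describes; the new computation stays within it.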