When the Linux server writes an odd-length data item into a Write chunk, it finishes with XDR pad bytes. If the data item is smaller than the Write chunk, the pad bytes are written at the end of the data item, but still inside the chunk. That can expose these zero bytes to the RPC consumer on the client.

XDR pad bytes are inserted to preserve the alignment of the next XDR data item in an XDR stream. But Write chunks do not appear in the payload XDR stream, and only one data item is allowed in each chunk, so XDR padding is unneeded there. The server should not write XDR pad bytes in Write chunks.

I believe this is not an operational problem. Short NFS READs that are returned in a Write chunk would be affected by this issue, but they happen only when the read request goes past EOF. Those are zero bytes anyway, and there is no file data in the client's buffer past EOF. Otherwise, current NFS clients provide a separate extra segment for catching XDR padding: if an odd-size data item fills the chunk, the XDR pad is written to the extra segment.

Signed-off-by: Chuck Lever <chuck.lever@xxxxxxxxxx>
---
 net/sunrpc/xprtrdma/svc_rdma_sendto.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/sunrpc/xprtrdma/svc_rdma_sendto.c b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
index df57f3c..8591314 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_sendto.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
@@ -308,7 +308,7 @@ static int send_write_chunks(struct svcxprt_rdma *xprt,
 			     struct svc_rqst *rqstp,
 			     struct svc_rdma_req_map *vec)
 {
-	u32 xfer_len = rqstp->rq_res.page_len + rqstp->rq_res.tail[0].iov_len;
+	u32 xfer_len = rqstp->rq_res.page_len;
 	int write_len;
 	u32 xdr_off;
 	int chunk_off;
--
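
As an aside for readers unfamiliar with XDR alignment, here is a minimal userspace sketch (plain C, not kernel code; xdr_pad_len() is a hypothetical helper invented for illustration) of where the pad bytes come from and why a Write chunk has no use for them:

#include <stdio.h>

/* Hypothetical helper, not a kernel function: the number of zero pad
 * bytes XDR appends after an item of the given length so that the
 * next item in the stream stays 4-byte aligned. */
static unsigned int xdr_pad_len(unsigned int len)
{
	return (4 - (len & 3)) & 3;
}

int main(void)
{
	/* An odd-length item in an XDR stream needs trailing pad... */
	printf("pad after a 5-byte item: %u\n", xdr_pad_len(5)); /* 3 */
	printf("pad after a 4-byte item: %u\n", xdr_pad_len(4)); /* 0 */

	/* ...but a Write chunk carries exactly one data item and sits
	 * outside the payload XDR stream, so there is no next item to
	 * align and the pad bytes are unnecessary. */
	return 0;
}

On the server side the pad bytes following the page data end up in the reply's tail kvec, which is why the fix above stops adding rq_res.tail[0].iov_len to the Write chunk transfer length and sends only page_len.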