On Thu, Jul 06, 2017 at 10:02:16AM -0400, J. Bruce Fields wrote:
> On Wed, Jul 05, 2017 at 10:36:17AM -0400, Chuck Lever wrote:
> > I would like to get this resolved quickly so that my nfsd-rdma-for-4.13
> > series can be included in v4.13.
>
> It will be in 4.13.
>
> > Is the v1 patch acceptable, or do we need a comment here? If a comment
> > is necessary, please provide the text you want to see.
>
> It's fine, we can do more later.
>
> The holdup is I'm still confused about the non-RDMA case: it looks to me
> like we're not reserving quite enough for an unaligned maximum-length
> read, but a pynfs test doesn't seem to be hitting any problem, so I
> think I'm misunderstanding that case....

OK, I see my confusion: that final NULL that svc_alloc_arg sets isn't a
"sentinel NULL" exactly. There's no code that assumes that rq_pages is
NULL-terminated. We initialize that final entry to NULL just so that
nfsd_splice_actor doesn't call put_page() on an uninitialized value.

That final entry is *only* needed in the splice case, to handle
maximum-length non-page-aligned zero-copy reads. And in the splice case
we don't need an actual allocated page; we're just going to put_page()
whatever's there and replace it with an entry from the page cache....

So, in the tcp case, to handle a big read request, assuming 4k pages, we
need 2 pages for the request and the head and tail of the reply, and
either:

	- 256 allocated pages to copy data into, in the readv case, or

	- 257 (not necessarily allocated) array entries to store page
	  cache references in, in the splice case.

Hence a 259-entry array with only 258 allocated entries and one NULL.

--b.
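
(Not part of the original thread: the arithmetic above can be sketched as
a toy calculation, assuming 4k pages and a 1MB maximum read payload; all
names here are illustrative, not kernel identifiers.)

```python
PAGE_SIZE = 4096
MAX_READ = 256 * PAGE_SIZE  # assumed 1MB maximum read payload

# 2 pages cover the request plus the head and tail of the reply.
overhead_pages = 2

# readv case: data is copied in, so every data page must be allocated.
readv_data_pages = MAX_READ // PAGE_SIZE  # 256

# splice case: a maximum-length read that is not page-aligned spans one
# extra page, so one more array slot is needed -- but these slots only
# hold references to page-cache pages, not freshly allocated pages.
splice_data_entries = MAX_READ // PAGE_SIZE + 1  # 257

array_entries = overhead_pages + splice_data_entries   # 259-entry array
allocated_pages = overhead_pages + readv_data_pages    # 258 allocated

print(array_entries, allocated_pages)  # 259 258
```

The last entry of the 259-slot array is the one initialized to NULL in
the real code, since only the splice path ever touches it.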