On Thu, Apr 20, 2017 at 10:57:10AM +1000, NeilBrown wrote:
> I realise NFSv4 compounds don't have that limitation.
> I wondered what code in the NFSv4 server ensures that we don't try to use
> more memory than was allocated.
>
> I notice lots of calls to xdr_reserve_space() in nfs4xdr.c.  Many of them
> trigger nfserr_resource when xdr_reserve_space() returns NULL.
> But not all.
> nfsd4_encode_readv() just pops up a warning.  Once.  Then it will
> (eventually) de-reference the NULL pointer and crash.
> So presumably it really cannot happen (should be a BUG_ON anyway)?
> So why can this not happen?
> I see that nfsd4_encode_read() limits the size of the read to
>     xdr->buf->buflen - xdr->buf->len
> and nfsd4_encode_readdir() does a similar thing when computing
> bytes_left.
>
> So, it is more careful about using the allocated pages than v2/3 is.

Yes.  The v4 code was written from the start with overflow checks
preceding any encode or decode.  And I tried to think this all through
carefully when I rewrote the encoding side a few years ago.

But I don't think that really got much review, and test coverage is poor
(a big thanks here to the Synopsys people for their fuzzing work), so
additional skeptical eyes are welcome....

There's a lot of tricky hand-written code here handling data from the
network.  Every now and then somebody brings up the idea of trying to
autogenerate it, as is traditionally done for RPC programs.  No idea how
practical that is.

--b.