David Laight <David.Laight@xxxxxxxxxx> wrote:

> On the face of it that passes a largely uninitialised 'struct msghdr'
> to cifs_readv_from_socket() in order to pass an iov_iter.
> That seems to be asking for trouble.
>
> If cifs_readv_from_socket() only needs the iov_iter then wouldn't
> it be better to do the wrapper the other way around?
> (Probably as an inline function)
> Something like:
>
> int
> cifs_readv_from_socket(struct TCP_Server_Info *server, struct msghdr *smb_msg)
> {
>         return cifs_read_iter_from_socket(server, &smb_msg->msg_iter, smb_msg->msg_iter.count);
> }
>
> and then changing cifs_readv_from_socket() to just use the iov_iter.

Yeah.  And smbd_recv() only cares about the iterator too.

> I'm also not 100% sure that taking a copy of an iov_iter is a good idea.

It shouldn't matter, as the only problematic iterator type is ITER_PIPE
(advancing that has side effects) - and splice_read is handled specially by
patch 4.

The problem with splice_read, given the way cifs works, is that cifs likes to
subdivide its read/write requests across multiple reqs and then further
subdivide them if certain types of failure occur.  But you can't do that with
ITER_PIPE.

I build an ITER_BVEC from ITER_PIPE, ITER_UBUF and ITER_IOVEC in the top
levels, with pins inserted as appropriate, and hand the ITER_BVEC down.  For
user-backed iterators it has to be done this way because the I/O may get
shuffled off to a different thread.

Reqs can then just copy the BVEC/XARRAY/KVEC and narrow the region, because
the master request at the top holds the vector list, and the top cifs level or
the caller above the vfs (e.g. sys_execve) does what is necessary to retain
the pages.

David
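
[Editorial note: the sketch below is illustrative only, not the patch code.
It shows the general shape of re-expressing a user-backed iterator as an
ITER_BVEC so that requests can copy and narrow it.  It uses the stock
iov_iter_get_pages2() and iov_iter_bvec() helpers (which take page refs rather
than pins, and whose direction constant is ITER_DEST on recent kernels);
the real series uses a dedicated extraction helper, and error unwinding and
page release are omitted here.]

	#include <linux/uio.h>
	#include <linux/mm.h>
	#include <linux/slab.h>

	/*
	 * Illustrative sketch: cover the bytes of a user-backed iterator
	 * with an ITER_BVEC that can safely be copied, narrowed and handed
	 * off to another thread.  Error unwinding and the eventual release
	 * of the page refs are left to the caller.
	 */
	static int sketch_iter_to_bvec(struct iov_iter *src, struct iov_iter *dst,
				       struct bio_vec **pbv, size_t count)
	{
		unsigned int max_pages = iov_iter_npages(src, INT_MAX);
		struct bio_vec *bv;
		struct page **pages;
		size_t offset, part, done = 0;
		unsigned int i, n = 0;
		ssize_t got;

		bv    = kvcalloc(max_pages, sizeof(*bv), GFP_KERNEL);
		pages = kvcalloc(max_pages, sizeof(*pages), GFP_KERNEL);
		if (!bv || !pages)
			return -ENOMEM;

		while (done < count) {
			/* Takes refs on the pages and advances src. */
			got = iov_iter_get_pages2(src, pages, count - done,
						  max_pages - n, &offset);
			if (got <= 0)
				return got ?: -EFAULT;

			/* One bvec entry per page chunk returned. */
			for (i = 0; got > 0; i++) {
				part = min_t(size_t, got, PAGE_SIZE - offset);
				bv[n].bv_page   = pages[i];
				bv[n].bv_offset = offset;
				bv[n].bv_len    = part;
				n++;
				done += part;
				got  -= part;
				offset = 0;
			}
		}

		kvfree(pages);
		/* Hand the same byte range down as a BVEC iterator. */
		iov_iter_bvec(dst, ITER_DEST, bv, n, count);
		*pbv = bv;
		return 0;
	}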