On Sun, 2022-07-03 at 22:56 -0700, Christoph Hellwig wrote:
> On Fri, Jul 01, 2022 at 06:30:11AM -0400, Jeff Layton wrote:
> > Currently, we take an iov_iter from the netfs layer, turn that into an
> > array of pages, and then pass that to the messenger which eventually
> > turns that back into an iov_iter before handing it back to the socket.
> >
> > This patchset adds a new ceph_msg_data_type that uses an iov_iter
> > directly instead of requiring an array of pages or bvecs. This allows
> > us to avoid an extra allocation in the buffered read path, and should
> > make it easier to plumb in write helpers later.
> >
> > For now, this is still just a slow, stupid implementation that hands
> > the socket layer a page at a time like the existing messenger does. It
> > doesn't yet attempt to pass through the iov_iter directly.
> >
> > I have some patches that pass the cursor's iov_iter directly to the
> > socket in the receive path, but it requires some infrastructure that's
> > not in mainline yet (iov_iter_scan(), for instance). It should be
> > possible to do something similar in the send path as well.
>
> Btw, is there any good reason to not simply replace ceph_msg_data
> with an iov_iter entirely?
>

Not really, no. What I'd probably do is change the existing
osd_req_op_* callers to use the new iov_iter msg_data type first, and
then once they were all converted you could phase out the use of
struct ceph_msg_data altogether.
-- 
Jeff Layton <jlayton@xxxxxxxxxx>