On Tue, 2020-08-25 at 13:28 -0700, John Hubbard wrote:
> On 8/25/20 1:13 PM, Jeff Layton wrote:
> > From: John Hubbard <jhubbard@xxxxxxxxxx>
> >
>
> I think that's meant to be, "From: Jeff Layton <jlayton@xxxxxxxxxx>".

Yeah, sorry -- artifact from squashing patches together. I noticed this
after I sent it out. It's fixed in the tree, though.

> This looks much nicer than what I came up with. :)
>
> > This special casing was added in 7ce469a53e71 (ceph: fix splice
> > read for no Fc capability case). The confirm callback for ITER_PIPE
> > expects that the page is Uptodate or a pagecache page and returns
> > an error otherwise.
> >
> > A simpler workaround is just to use the Uptodate bit, which has no
> > meaning for anonymous pages. Rip out the special casing for ITER_PIPE
> > and just SetPageUptodate before we copy to the iter.
> >
> > Cc: "Yan, Zheng" <ukernel@xxxxxxxxx>
> > Cc: John Hubbard <jhubbard@xxxxxxxxxx>
> > Signed-off-by: Jeff Layton <jlayton@xxxxxxxxxx>
> > Suggested-by: Al Viro <viro@xxxxxxxxxxxxxxxxxx>
> > ---
> >  fs/ceph/file.c | 71 +++++++++++++++++---------------------------------
> >  1 file changed, 24 insertions(+), 47 deletions(-)
> >
> > diff --git a/fs/ceph/file.c b/fs/ceph/file.c
> > index fb3ea715a19d..ed8fbfe3bddc 100644
> > --- a/fs/ceph/file.c
> > +++ b/fs/ceph/file.c
> > @@ -863,6 +863,8 @@ static ssize_t ceph_sync_read(struct kiocb *iocb, struct iov_iter *to,
> >  	size_t page_off;
> >  	u64 i_size;
> >  	bool more;
> > +	int idx;
> > +	size_t left;
> >
> >  	req = ceph_osdc_new_request(osdc, &ci->i_layout,
> >  				ci->i_vino, off, &len, 0, 1,
> > @@ -876,29 +878,13 @@ static ssize_t ceph_sync_read(struct kiocb *iocb, struct iov_iter *to,
> >
> >  	more = len < iov_iter_count(to);
> >
> > -	if (unlikely(iov_iter_is_pipe(to))) {
> > -		ret = iov_iter_get_pages_alloc(to, &pages, len,
> > -					&page_off);
>
> +1 for removing a call to iov_iter_get_pages_alloc()! My list is
> shorter now.
>

Yep, and we got rid of some special-casing in ceph to boot.
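
For anyone following along, the copy loop after the patch ends up
looking roughly like this. This is a sketch, not the verbatim hunk:
SetPageUptodate() and copy_page_to_iter() are the real kernel helpers
being described above, but the surrounding loop and the num_pages /
plen / copied names here are illustrative:

	/* "pages" are anonymous pages the read landed in, not
	 * pagecache pages. */
	for (idx = 0; idx < num_pages && left > 0; idx++) {
		size_t plen = min_t(size_t, left, PAGE_SIZE - page_off);
		size_t copied;

		/* PG_uptodate has no meaning for anonymous pages, so
		 * setting it is harmless, and it satisfies the
		 * ITER_PIPE confirm callback's check. */
		SetPageUptodate(pages[idx]);

		copied = copy_page_to_iter(pages[idx], page_off,
					   plen, to);
		left -= copied;
		page_off = 0;
		if (copied < plen)
			break;
	}

With that, the same path handles pipe and non-pipe iters, which is
what lets the iov_iter_get_pages_alloc() special case go away.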
Thanks for bringing it to our attention!
-- 
Jeff Layton <jlayton@xxxxxxxxxx>