Re: [PATCH v2] SUNRPC: Fix TCP receive code on archs with flush_dcache_page()

On Thu, 2019-01-03 at 11:16 +0100, Geert Uytterhoeven wrote:
> Hi Trond,
> 
> On Thu, Jan 3, 2019 at 7:14 AM Trond Myklebust <trondmy@xxxxxxxxx>
> wrote:
> > After receiving data into the page cache, we need to call
> > flush_dcache_page() for the architectures that define it.
> > 
> > Fixes: 277e4ab7d530b ("SUNRPC: Simplify TCP receive code by switching...")
> > Reported-by: Geert Uytterhoeven <geert@xxxxxxxxxxxxxx>
> > Signed-off-by: Trond Myklebust <trond.myklebust@xxxxxxxxxxxxxxx>
> > Cc: stable@xxxxxxxxxxxxxxx # v4.20
> 
> Thanks for your patch!
> 
> > --- a/net/sunrpc/xprtsock.c
> > +++ b/net/sunrpc/xprtsock.c
> > @@ -48,6 +48,7 @@
> >  #include <net/udp.h>
> >  #include <net/tcp.h>
> >  #include <linux/bvec.h>
> > +#include <linux/highmem.h>
> >  #include <linux/uio.h>
> > 
> >  #include <trace/events/sunrpc.h>
> > @@ -380,6 +381,27 @@ xs_read_discard(struct socket *sock, struct msghdr *msg, int flags,
> >         return sock_recvmsg(sock, msg, flags);
> >  }
> > 
> > +#if ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE
> > +static void
> > +xs_flush_bvec(const struct bio_vec *bvec, size_t count, size_t seek)
> > +{
> > +       struct bvec_iter bi, __start = {
> 
> As for_each_bvec() assigns __start to bi, and you don't need __start
> afterwards, both variables can be merged into a single one.
> But perhaps that would make too many assumptions about the
> implementation of for_each_bvec()?

No, that's a good suggestion. I've sent out a (hopefully) final v3 with
that change.
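
For reference, the merged-variable version would presumably look something
like the sketch below (assumption on my part: this mirrors what v3 does, but
v3 itself is not quoted in this thread):

static void
xs_flush_bvec(const struct bio_vec *bvec, size_t count, size_t seek)
{
        /* Sketch of the merged-variable form suggested above; relies on
         * <linux/bvec.h> and <linux/highmem.h>, which the patch already
         * includes.
         */
        struct bvec_iter bi = {
                .bi_size = count,
        };
        struct bio_vec bv;

        bvec_iter_advance(bvec, &bi, seek & PAGE_MASK);

        /* Passing bi as both the iterator and the start value works
         * because for_each_bvec() begins by assigning start to the
         * iterator, which is a harmless self-assignment here.
         */
        for_each_bvec(bv, bvec, bi, bi)
                flush_dcache_page(bv.bv_page);
}
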

> 
> > +               .bi_size = count,
> > +       };
> > +       struct bio_vec bv;
> > +
> > +       bvec_iter_advance(bvec, &__start, seek & PAGE_MASK);
> > +
> > +       for_each_bvec(bv, bvec, bi, __start)
> > +               flush_dcache_page(bv.bv_page);
> > +}
> > +#else
> > +static inline void
> > +xs_flush_bvec(const struct bio_vec *bvec, size_t count, size_t seek)
> > +{
> > +}
> > +#endif
> > +
> >  static ssize_t
> >  xs_read_xdr_buf(struct socket *sock, struct msghdr *msg, int flags,
> >                 struct xdr_buf *buf, size_t count, size_t seek, size_t *read)
> > @@ -413,6 +435,7 @@ xs_read_xdr_buf(struct socket *sock, struct msghdr *msg, int flags,
> >                                 seek + buf->page_base);
> >                 if (ret <= 0)
> >                         goto sock_err;
> > +               xs_flush_bvec(buf->bvec, ret, seek + buf->page_base);
> >                 offset += ret - buf->page_base;
> >                 if (offset == count || msg->msg_flags & (MSG_EOR|MSG_TRUNC))
> >                         goto out;
> 
> I don't understand the code well enough to see why the call to
> xs_flush_bvec() is needed in this branch only, but it does fix TCP
> NFS on RBTX4927, so
> Tested-by: Geert Uytterhoeven <geert@xxxxxxxxxxxxxx>

Thanks!
  Trond

-- 
Trond Myklebust
Linux NFS client maintainer, Hammerspace
trond.myklebust@xxxxxxxxxxxxxxx


