On Fri, 2017-06-23 at 16:48 -0400, Chuck Lever wrote:
> > On Jun 21, 2017, at 10:31 AM, Chuck Lever <chuck.lever@xxxxxxxxxx> wrote:
> > 
> > > On Jun 20, 2017, at 7:35 PM, Trond Myklebust <trond.myklebust@primarydata.com> wrote:
> > > 
> > > The following patches are intended to smooth out page writeback
> > > performance by ensuring that we commit the data earlier on the
> > > server.
> > > 
> > > We assume that if something is starting writeback on the pages,
> > > then that process wants to commit the data as soon as possible,
> > > whether it is an application or just the background flush process.
> > > We also assume that for streaming-type processes, we don't want to
> > > pause the I/O in order to commit, so we don't want to rely on a
> > > counter of in-flight I/O to the entire inode going to zero.
> > > 
> > > We therefore set up a monitor that counts the number of in-flight
> > > writes for each call to nfs_writepages(). Once all the writes for
> > > that call to nfs_writepages() have completed, we send the commit.
> > > Note that this mirrors the behaviour for O_DIRECT writes, where we
> > > similarly track the in-flight writes on a per-call basis.
> > 
> > These are the same as the patches you sent May 16th?
> > I am trying to get a little time to try them out.
> 
> After applying these four patches, I ran a series of iozone
> benchmarks with buffered and direct I/O. NFSv3 and NFSv4.0
> on RDMA. Exports were tmpfs and xfs on NVMe.
> 
> I see about a 10% improvement in buffered write throughput,
> no degradation elsewhere, and no crashes or other misbehaviour.

Cool! Thanks for testing.

> xfstests passes with the usual few failures.
> 
> Buffered write throughput is still limited to 1GB/s when
> targeting a tmpfs export on a 5.6GB/s network. The server
> isn't breaking a sweat, but the client appears to be hitting
> some spin locks pretty hard. This is similar behaviour to
> before the patches were applied.

Just out of curiosity, do you see the same behaviour with O_DIRECT
against the tmpfs? There are two differences there:

1) No inode_lock(inode) contention.
2) Slightly less inode->i_lock spinlock contention.

> > > Trond Myklebust (3):
> > >   NFS: Remove unused fields in the page I/O structures
> > >   NFS: Ensure we commit after writeback is complete
> > >   NFS: Fix commit policy for non-blocking calls to nfs_write_inode()
> > > 
> > >  fs/nfs/pagelist.c        |  5 ++--
> > >  fs/nfs/write.c           | 59 +++++++++++++++++++++++++++++++++++++++++++++++-
> > >  include/linux/nfs_page.h |  2 +-
> > >  include/linux/nfs_xdr.h  |  3 ++-
> > >  4 files changed, 64 insertions(+), 5 deletions(-)
> > > 
> > > -- 
> > > 2.9.4
> > 
> > -- 
> > Chuck Lever
> 
> -- 
> Chuck Lever

-- 
Trond Myklebust
Linux NFS client maintainer, PrimaryData
trond.myklebust@xxxxxxxxxxxxxxx
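
For readers following along, the per-call in-flight tracking described
in the cover letter can be illustrated with a minimal user-space
sketch. All identifiers below (wp_call, wp_write_issued, wp_write_done,
send_commit) are hypothetical names chosen for illustration only; the
actual fs/nfs/write.c patches use the kernel's own reference-counting
primitives, not C11 atomics.

/*
 * Sketch: count in-flight writes per writepages call, and send the
 * commit when the count for that particular call drops to zero --
 * without waiting for all I/O to the whole inode to drain.
 */
#include <stdatomic.h>
#include <stdio.h>

/* One of these per call into the (hypothetical) writepages path. */
struct wp_call {
	atomic_int inflight;	/* writes issued by this call, not yet completed */
};

/* Stand-in for sending a COMMIT to the server. */
static void send_commit(struct wp_call *call)
{
	(void)call;
	printf("last write for this writepages call done -> send COMMIT\n");
}

/* One write has been issued on behalf of this writepages call. */
static void wp_write_issued(struct wp_call *call)
{
	atomic_fetch_add(&call->inflight, 1);
}

/*
 * Completion callback for one write. Whoever drops the counter to
 * zero -- normally the last completing write -- fires the commit.
 */
static void wp_write_done(struct wp_call *call)
{
	if (atomic_fetch_sub(&call->inflight, 1) == 1)
		send_commit(call);
}

int main(void)
{
	struct wp_call call;
	int i;

	/*
	 * Hold one extra reference for the duration of the issuing
	 * phase, so a write that completes early cannot drive the
	 * counter to zero and trigger a premature commit.
	 */
	atomic_init(&call.inflight, 1);

	for (i = 0; i < 4; i++)
		wp_write_issued(&call);

	wp_write_done(&call);		/* done issuing; drop the extra reference */

	for (i = 0; i < 4; i++)
		wp_write_done(&call);	/* completions; the last one commits */

	return 0;
}

The extra reference held across the issuing phase is what makes the
per-call trigger safe: the commit can only fire once the call has
finished submitting all of its writes, which is the same reason the
O_DIRECT path can track in-flight writes on a per-call basis.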
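
For reference, a buffered-versus-O_DIRECT comparison of the kind
discussed above might look something like the following iozone
invocations. The file size, record size, and mount path here are
placeholders, not the parameters actually used in Chuck's runs:

	# buffered sequential write/rewrite test
	iozone -i 0 -r 1m -s 2g -f /mnt/nfs/testfile

	# the same test with O_DIRECT (-I), to compare lock contention
	iozone -i 0 -r 1m -s 2g -I -f /mnt/nfs/testfile

Running both against the tmpfs export would show whether the 1GB/s
ceiling follows the inode_lock()/i_lock contention that the buffered
path takes and the O_DIRECT path largely avoids.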