On Wed, Jan 9, 2019 at 3:38 PM Trond Myklebust <trondmy@xxxxxxxxxxxxxxx> wrote:
>
> Hi Olga
>
> On Wed, 2019-01-09 at 14:39 -0500, Olga Kornievskaia wrote:
> > Hi Trond,
> >
> > Do you have any plans for this patch set?
> >
> > I applied the patches on top of a 4.20-rc7 kernel I had and tested it
> > (linux to linux) with iozone on the hardware (40G link with Mellanox
> > CX-5 card).
> >
> > Results seem to show read IO improvement from 1.9GB/s to 3.9GB/s.
> > Write IO speed seems to be the same (disk bound, I'm guessing). I
> > also tried mounting tmpfs. Same thing.
> >
> > Seems like a useful feature to include?
>
> Thanks for testing this.
>
> Was this your own port of the original patches, or have you taken my
> branch from
> http://git.linux-nfs.org/?p=trondmy/linux-nfs.git;a=shortlog;h=refs/heads/multipath_tcp
> ?

I didn't know one existed. I just took the original patches from the
mailing list and applied them to 4.20-rc7 (they applied without issues
that I recall).

> Either way I appreciate the data point. I haven't seen too many other
> reports of performance improvements, and that's the main reason why
> this patchset has languished.
>
> 3.9GB/s would be about 31Gbps, so that is not quite wire speed, but
> certainly a big improvement on 1.9GB/s.

Maybe it's the lab setup that's not tuned to achieve max performance.

> I'm a little surprised, though,
> that the write performance did not improve with the tmpfs. Was all this
> using aio+dio on the client?

It is whatever "iozone -i0 -i1 -s52m -y2k -az -I" translates to.

To clarify, by "didn't improve" I didn't mean that the write speed with
disk is the same as the write speed with tmpfs (disk write speed is
~168MB/s and tmpfs write speed is 1.47GB/s). I meant that it seems that
even with nconnect=1 it already achieves the "max" performance of the
disk/tmpfs backing store.

> --
> Trond Myklebust
> Linux NFS client maintainer, Hammerspace
> trond.myklebust@xxxxxxxxxxxxxxx
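[For readers following along: a rough sketch of the test setup discussed above. The server name, export path, and mount point are placeholders, not taken from the thread; the nconnect mount option is the one added by this patch set. The only line that actually executes here is the bandwidth conversion at the end (GB/s to Gbps is just a factor of 8), which shows where the "about 31Gbps" figure comes from.]

```shell
#!/bin/sh
# Hypothetical reproduction sketch (server:/export and /mnt are placeholders):
#
#   mount -t nfs -o vers=4.1,nconnect=8 server:/export /mnt
#   cd /mnt && iozone -i0 -i1 -s52m -y2k -az -I
#
# iozone flags used in the thread: -i0 (write test), -i1 (read test),
# -s52m (file size), -az (auto mode over record sizes), -I (direct I/O,
# i.e. O_DIRECT, bypassing the client page cache).

# Sanity check on the numbers quoted above: 3.9 GB/s * 8 bits/byte,
# which falls short of line rate on a 40G link.
awk 'BEGIN { printf "%.1f Gbps\n", 3.9 * 8 }'
```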