On Fri, Feb 2, 2018 at 10:09 AM, Chuck Lever <chuck.lever@xxxxxxxxxx> wrote:
>
>
>> On Feb 2, 2018, at 10:49 AM, bfields@xxxxxxxxxxxx wrote:
>>
>> On Thu, Feb 01, 2018 at 08:59:18PM +0200, Boaz Harrosh wrote:
>>> On 01/02/18 20:34, Chuck Lever wrote:
<>
>>>> This work was also presented at the SNIA Persistent Memory Summit
>>>> last week. The use case of course is providing a user space
>>>> platform for the development and deployment of memory-based file
>>>> systems. The value-add of this kind of file system is ultra-low
>>>> latency, which is a challenge for the current most popular such
>>>> framework, FUSE.
>>>>
>>>> To start, I can think of three areas where specific questions might
>>>> be entertained by LSF/MM attendees:
>>>>
>>>> - Spectre mitigations make this whole "user space filesystem"
>>>>   arrangement even slower, thanks to additional context switches
>>>>   between user space and the kernel.
>>
>> I think you're referring to the KPTI patches, which address Meltdown,
>> not Spectre.
>
> I enabled KPTI on my NFS client and server systems in early
> v4.15-rc, and didn't measure a change in latency or throughput.
>
> But with v4.15 final, which includes some Spectre mitigations,
> write(2) on NFS files, for example, takes about 15us longer.
> Since the RPC round-trip times did not increase, I presume this
> extra latency is incurred on the client, where the user-kernel
> boundary transitions occur.
>
> <shrug>

That is interesting data. A loosely related question is whether ZUFS
would be helpful in a typical NFS or SMB3 scenario (under the server),
especially with a low-latency RDMA (SMB3 Direct) connection to the
server. (In the case of SMB3, we would want to consider what this
would look like with I/O from the same client, potentially to the
same file, coming in on multiple RDMA cards.)

--
Thanks,

Steve
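
P.S. On the write(2) numbers quoted above: a minimal microbenchmark
along the lines of the sketch below is enough to expose per-syscall
overhead differences between kernel builds, since each iteration
crosses the user-kernel boundary exactly once. The mount point,
buffer size, and iteration count here are illustrative assumptions,
not Chuck's actual setup.

    /* Minimal sketch: time repeated 4KB pwrite(2) calls to a file
     * on an NFS mount and report the average per-call latency.
     * The path and iteration count are illustrative assumptions. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
            char buf[4096];
            struct timespec t0, t1;
            long i, iters = 10000;

            int fd = open("/mnt/nfs/testfile",
                          O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (fd < 0) {
                    perror("open");
                    return 1;
            }
            memset(buf, 0, sizeof(buf));

            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (i = 0; i < iters; i++) {
                    /* Buffered write: small-write latency here is
                     * dominated by the syscall boundary, which is
                     * where the mitigation cost should show up. */
                    if (pwrite(fd, buf, sizeof(buf), 0) != sizeof(buf)) {
                            perror("pwrite");
                            return 1;
                    }
            }
            clock_gettime(CLOCK_MONOTONIC, &t1);
            close(fd);

            printf("avg write(2) latency: %ld ns\n",
                   ((t1.tv_sec - t0.tv_sec) * 1000000000L +
                    (t1.tv_nsec - t0.tv_nsec)) / iters);
            return 0;
    }

Comparing the averages from runs on the same client with the
mitigations disabled and enabled should isolate the boundary-
transition cost from any change in RPC round-trip time.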