> On Jan 25, 2016, at 4:19 PM, Chuck Lever <chuck.lever@xxxxxxxxxx> wrote:
>
> I'd like to propose a discussion of how to take advantage of
> persistent memory in network-attached storage scenarios.
>
> RDMA runs on high-speed network fabrics and offloads data
> transfer from host CPUs. Thus it is a good match to the
> performance characteristics of persistent memory.
>
> Today Linux supports iSER, SRP, and NFS/RDMA on RDMA
> fabrics. What kind of changes are needed in the Linux I/O
> stack (in particular, storage targets) and in these storage
> protocols to get the most benefit from ultra-low latency
> storage?
>
> There have been recent proposals about how storage protocols
> and implementations might need to change (e.g. Tom Talpey's
> SNIA proposals for changing to a push data transfer model,
> Sagi's proposal to utilize DAX under the NFS/RDMA server,
> and my proposal for a new pNFS layout to drive RDMA data
> transfer directly).
>
> The outcome of the discussion would be to understand what
> people are working on now and what the desired architectural
> approach is, in order to determine where storage developers
> should be focused.
>
> This could be either a BoF or a session during the main
> tracks. There is sure to be a narrow segment of each
> track's attendees that would have an interest in this topic.
>
> --
> Chuck Lever

Chuck,

One difference on targets is that some NVM/persistent memory may be
byte-addressable, while other NVM is only block-addressable.

Another difference is that NVMe-over-Fabrics will allow remote access
to the target's NVMe devices using the NVMe API.

Scott