On Thu, Jan 2, 2020 at 7:19 PM Jason Gunthorpe <jgg@xxxxxxxx> wrote:
>
> On Fri, Dec 20, 2019 at 04:50:44PM +0100, Jack Wang wrote:
> > Hi all,
> >
> > here is V5 of the RTRS (former IBTRS) rdma transport library and the
> > corresponding RNBD (former IBNBD) rdma network block device.
> >
> > Main changes are the following:
> > 1. Fix the security problem pointed out by Jason
> > 2. Implement code-style/readability/API/etc suggestions by Bart van Assche
> > 3. Rename IBTRS and IBNBD to RTRS and RNBD accordingly
> > 4. Fileio mode support in rnbd-srv has been removed.
> >
> > The main functional change is a fix for the security problem pointed out by
> > Jason and discussed both on the mailing list and during the last LPC RDMA MC 2019.
> > On the server side we now invalidate in RTRS each rdma buffer before we hand it
> > over to RNBD server and in turn to the block layer. A new rkey is generated and
> > registered for the buffer after it returns back from the block layer and RNBD
> > server. The new rkey is sent back to the client along with the IO result.
> > The procedure is the default behaviour of the driver. This invalidation and
> > registration on each IO causes a performance drop of up to 20%. A user of the
> > driver may choose to load the modules with this mechanism switched off.
>
> So, how does this compare now to nvme over fabrics?
>
> I recall there were questions why we needed yet another RDMA block
> transport?
>
> Jason

Performance results for the v5.5-rc1 kernel are here:
Link: https://github.com/ionos-enterprise/ibnbd/tree/develop/performance/v5-v5.5-rc1

On some workloads RNBD is faster, on others NVMeoF is faster.
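
For reference, below is a minimal sketch of the per-IO invalidate/re-register
flow described in the cover letter above, built only on the standard kernel
verbs API. It is not the actual rtrs-srv code: the helper names
(srv_buf_invalidate(), srv_buf_reregister()) and the access flags are
illustrative assumptions, and error/completion handling is omitted.

/*
 * Sketch only, not rtrs-srv code: invalidate the server-side MR before the
 * buffer is handed to the block layer, then re-register it with a fresh rkey
 * once the IO completes, so the client only ever receives the new key
 * together with the IO result.
 */
#include <linux/scatterlist.h>
#include <rdma/ib_verbs.h>

/* Post a local-invalidate WR so the old rkey can no longer be used for
 * remote access while the block layer owns the buffer. */
static int srv_buf_invalidate(struct ib_qp *qp, struct ib_mr *mr)
{
	struct ib_send_wr inv_wr = {
		.opcode		    = IB_WR_LOCAL_INV,
		.send_flags	    = IB_SEND_SIGNALED,
		.ex.invalidate_rkey = mr->rkey,
	};

	return ib_post_send(qp, &inv_wr, NULL);
}

/* After the IO returns, bump the key portion of the rkey and post a
 * fast-registration WR; the resulting mr->rkey is what would be sent back
 * to the client along with the IO result. */
static int srv_buf_reregister(struct ib_qp *qp, struct ib_mr *mr,
			      struct scatterlist *sgl, int nents)
{
	struct ib_reg_wr reg_wr;
	int n;

	ib_update_fast_reg_key(mr, ib_inc_rkey(mr->rkey));

	n = ib_map_mr_sg(mr, sgl, nents, NULL, PAGE_SIZE);
	if (n < nents)
		return n < 0 ? n : -EINVAL;

	memset(&reg_wr, 0, sizeof(reg_wr));
	reg_wr.wr.opcode = IB_WR_REG_MR;
	reg_wr.wr.send_flags = IB_SEND_SIGNALED;
	reg_wr.mr = mr;
	reg_wr.key = mr->rkey;
	reg_wr.access = IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_WRITE;

	return ib_post_send(qp, &reg_wr.wr, NULL);
}

Running both steps on every IO is what accounts for the up to 20% drop
mentioned above, which is why the module parameter to switch the mechanism
off exists.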