Re: Kernel fast memory registration API proposal [RFC]

On Thu, Jul 16, 2015 at 04:07:04PM -0400, Chuck Lever wrote:

> The MRs are registered only for remote read. I don’t think
> catastrophic harm can occur on the client in this case if the
> invalidation and DMA sync comes late. In fact, I’m unsure why
> a DMA sync is even necessary as the MR is invalidated in this
> case.

For RDMA, the worst case would be some kind of information leakage or
machine check halt.

For the read side, the DMA API should be called before posting the
FRWR; there are no completion-side issues.
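
A minimal sketch of that ordering, with generic names (not the actual
xprtrdma code), assuming sg/nents describe the memory the peer will
read:

  nents = ib_dma_map_sg(device, sg, nents, DMA_TO_DEVICE);
  if (!nents)
    return -EIO;
  /* only after the mapping/sync may the FRWR exposing this
     memory be posted */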

> In the case of incoming data payloads (NFS READ) the DMA sync
> ordering is probably an important issue. The sync has to happen
> before the ULP can touch the data, 100% of the time.

Absolutely, the sync is critical.

> That could be addressed by performing a DMA sync on the write
> list or reply chunk MRs right in the RPC reply handler (before
> xprt_complete_rqst).

That sounds good to me, much more in line with what I'd expect to
see. The fmr unmap and invalidate post should also be in the reply
handler (for flow control reasons, see below).
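
Roughly, in the reply handler, something like this ordering (names
are illustrative, not the actual xprtrdma fields):

  ib_dma_sync_single_for_cpu(device, mr_dma_addr, mr_len,
                             DMA_FROM_DEVICE);
  /* post the LOCAL_INV / fmr unmap for the chunk MRs here too */
  xprt_complete_rqst(task, copied);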

> > The only absolutely correct way to run the RDMA stack is to keep track
> > of SQ/SCQ space directly, and only update that tracking by processing
> > SCQEs.
> 
> In other words, the only time it is truly safe to do a post_send is
> after you’ve received a send completion that indicates you have
> space on the send queue.

Yes.

Use a scheme where you suppress signaling and use the SQE accounting
to request a signaled completion roughly every 1/2 length of the SQ.
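
Something along these lines, assuming SQ_LEN and an 'unsignaled' post
counter (illustrative names):

  struct ib_send_wr *bad_wr;

  wr->send_flags &= ~IB_SEND_SIGNALED;
  if (++unsignaled >= SQ_LEN / 2) {
    wr->send_flags |= IB_SEND_SIGNALED;
    unsignaled = 0;
  }
  ret = ib_post_send(qp, wr, &bad_wr);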

Use the WRID in some way to encode the number of SQEs each completion
represents.

I've used a scheme where the wrid is a wrapping index into an array,
SQ length long, that holds any meta information.

That makes it trivial to track SQE accounting and avoids memory
allocations for wrids.

Generically:

  /* wr_id is a wrapping index into wr_data[], an array SQ_LEN long */
  posted_sqes -= (wc->wr_id + SQ_LEN - last_wrid) % SQ_LEN;
  for (i = last_wrid; i != wc->wr_id; i = (i + 1) % SQ_LEN)
    complete(wr_data[i].ptr);
  last_wrid = wc->wr_id;

Many other options, too.

-----

There is a bit more going on too: *technically* the HCA owns the
buffer until a SCQE is produced. The recv proves the peer will drop
any re-transmits of the message, but it doesn't prove that the local
HCA won't generate a re-transmit. Lost acks or other protocol
weirdness could *potentially* cause buffer re-read in the general RDMA
framework.

So if you use recv to drive re-use of the SEND buffer memory, it is
important that the SEND buffer remain full of data to send to that
peer and not be kfree'd, dma unmapped, or reused for another peer's
data.

kfree/dma unmap/etc may only be done on a SEND buffer after seeing a
SCQE proving that buffer is done, or after tearing down the QP and
halting the send side.
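
I.e. the release belongs in the send completion handler, something
like this (sketch, with hypothetical per-WR 'buf' bookkeeping):

  case IB_WC_SEND:
    ib_dma_unmap_single(device, buf->dma_addr, buf->length,
                        DMA_TO_DEVICE);
    kfree(buf);
    break;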

> The problem then is how do you make the RDMA consumer wait until
> there is send queue space. I suppose the xprt_complete_rqst()

It depends on the overall ULP design..

For work that is created by the recv queue (i.e. invalidates, new
posts, etc.) I've had success simply stopping polling the RQ if the SQ
doesn't have room to issue the largest single compound a recv would
require.

I.e. on the client side a recv may require issuing an INVALIDATE, so
when the SQ fills, stop processing recvs.
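
A sketch of that gating (sq_space(), MAX_RECV_COMPOUND_SQES and
handle_recv() are placeholders):

  while (sq_space(xprt) >= MAX_RECV_COMPOUND_SQES &&
         ib_poll_cq(recv_cq, 1, &wc) > 0)
    handle_recv(&wc);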

> could be postponed in this case, or simulated xprt congestion
> could be used to prevent starting new RPCs while the send queue
> is full.

Then the other half is async new work from someplace else; like the RQ
case above, stop that work from advancing if the SQ cannot hold the
largest required compound. Sounds like this is 2 (FRWR, SEND) for the
NFS client.
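
E.g. something like this, with posted_sqes/SQ_LEN as in the
accounting above; the constant 2 covers the FRWR + SEND pair:

  if (posted_sqes + 2 > SQ_LEN)
    return -EAGAIN;  /* simulate xprt congestion, retry later */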

Jason