On 4/27/2014 3:37 PM, Chuck Lever wrote:
<SNIP>
Why not first create a new id+qp and assign them, and then destroy the old id+qp?
See the SRP-related code: ib_srp.c:srp_create_target_ib().
Anyway, it is indeed important to guarantee that no xmit flows happen concurrently with that,
and that cleanups are processed synchronously and in order.
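For illustration only, the create-new-then-destroy-old ordering might look
roughly like this (a sketch; rpcrdma_create_id_qp() and the transport fields
are hypothetical stand-ins, not actual xprtrdma code):

#include <rdma/ib_verbs.h>
#include <rdma/rdma_cm.h>

/* Sketch: build the replacement id+qp first, publish it, and only then
 * tear down the old one.
 */
static int replace_id_and_qp(struct my_transport *transport)
{
	struct rdma_cm_id *new_id, *old_id;
	int rc;

	rc = rpcrdma_create_id_qp(transport, &new_id);	/* hypothetical */
	if (rc)
		return rc;		/* old id+qp remain in place */

	/* No xmit flow may run concurrently with this swap. */
	old_id = transport->cm_id;
	transport->cm_id = new_id;

	/* Clean up the old resources synchronously and in order. */
	rdma_destroy_qp(old_id);
	rdma_destroy_id(old_id);
	return 0;
}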
I posted a patch on Friday that should provide that serialization.
2. If a QP is present but disconnected, posting LOCAL_INV won’t work.
That leaves buffers (and page cache pages, potentially) registered.
That could be addressed with LINV+FRMR. But...
3. The client should not leave page cache pages registered indefinitely.
Both LINV+FRMR and our current approach depend on having a working
QP _at_ _some_ _point_ … but the client simply can’t depend on that.
What happens if an NFS server is, say, destroyed by fire while there
are active client mount points? What if the HCA’s firmware
permanently refuses to create new QPs?
Again, I don't understand why you can't dereg_mr().
How about allocating the FRMR pool *after* the QP has been successfully created/connected (which makes
sense, as the FRMRs are not usable until then), and destroying/cleaning up the pool before the QP is
disconnected/destroyed? It also makes sense because the PDs must match.
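A rough sketch of that ordering (rpcrdma_alloc_frmr_pool(),
rpcrdma_destroy_frmr_pool() and pool_depth are hypothetical names, not
existing xprtrdma functions):

#include <rdma/ib_verbs.h>
#include <rdma/rdma_cm.h>

static int ep_connect(struct rdma_cm_id *id, struct ib_pd *pd,
		      struct ib_qp_init_attr *attr, int pool_depth)
{
	int rc;

	rc = rdma_create_qp(id, pd, attr);		/* QP first */
	if (rc)
		return rc;

	/* ... then the FRMR pool, on the same PD as the QP. */
	rc = rpcrdma_alloc_frmr_pool(pd, pool_depth);	/* hypothetical */
	if (rc)
		rdma_destroy_qp(id);
	return rc;
}

static void ep_disconnect(struct rdma_cm_id *id)
{
	rpcrdma_destroy_frmr_pool();	/* hypothetical: pool goes first */
	rdma_disconnect(id);
	rdma_destroy_qp(id);		/* ... then the QP */
}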
It’s not about deregistration, but rather about invalidation; I was
confused.
OK got it.
xprt_rdma_free() invalidates and then frees the chunks on RPC chunk
lists. We just need to see that those invalidations can be successful
while the transport is disconnected.
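For reference, the invalidation in question is just a LOCAL_INV work
request; a minimal sketch using standard ib_verbs calls, nothing
xprtrdma-specific:

#include <linux/string.h>
#include <rdma/ib_verbs.h>

static int post_local_inv(struct ib_qp *qp, struct ib_mr *mr)
{
	struct ib_send_wr inv_wr, *bad_wr;

	memset(&inv_wr, 0, sizeof(inv_wr));
	inv_wr.opcode = IB_WR_LOCAL_INV;
	inv_wr.send_flags = IB_SEND_SIGNALED;
	inv_wr.ex.invalidate_rkey = mr->rkey;

	/* Fails, or the WR is simply flushed, once the QP has left RTS. */
	return ib_post_send(qp, &inv_wr, &bad_wr);
}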
They can't be completed, though. Can't you just skip invalidation? It will
be done when the FRMR is reused.
Sorry to be difficult, but I still don't understand why invalidation
must occur in this case.
I understand that even in the error state, a QP should allow consumers
to post send WRs to invalidate FRMRs…?
It's allowed, but they won't execute (I'd be surprised if they did).
AFAIK posting on a QP in the error state has only one use-case: posting a
colored WQE to know when the FLUSH errors have ended.
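That use-case, sketched (the marker value is my own arbitrary choice; the
point is only that the marker's flush completion tells you every earlier WR
has already been flushed):

#include <linux/string.h>
#include <rdma/ib_verbs.h>

#define DRAIN_MARKER_WRID 0x4452414eULL		/* arbitrary "color" */

static int post_drain_marker(struct ib_qp *qp)
{
	struct ib_send_wr wr, *bad_wr;

	memset(&wr, 0, sizeof(wr));
	wr.wr_id = DRAIN_MARKER_WRID;
	wr.opcode = IB_WR_SEND;
	wr.send_flags = IB_SEND_SIGNALED;

	return ib_post_send(qp, &wr, &bad_wr);
}

/* In the send completion handler: once this returns true, every WR posted
 * before the marker has completed or been flushed. */
static bool drain_is_done(struct ib_wc *wc)
{
	return wc->wr_id == DRAIN_MARKER_WRID &&
	       wc->status == IB_WC_WR_FLUSH_ERR;
}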
The other case is whether the consumer can _replace_ a QP with a fresh
one, and still have invalidations succeed, even if the transport remains
disconnected, once waiting RPCs are awoken.
It is known which invalidations succeeded and which didn't, so I don't
see the problem here.
The only thing is the corner case that Steve indicated wrt flush errors,
but if I recall correctly,
posting a LINV on an already-invalidated MR is allowed.
An alternative would be to invalidate all waiting RPC chunk lists on a
transport as soon as the QP goes to error state but before it is
destroyed, and fastreg the chunks again when waiting RPCs are
remarshalled.
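A sketch of that alternative, with everything here (the pending list, the
needs_rereg flag, and the chunk-invalidation helper) as hypothetical
stand-ins for whatever xprtrdma would actually use:

#include <linux/list.h>
#include <rdma/ib_verbs.h>

struct pending_req {
	struct list_head	list;
	bool			needs_rereg;
	/* ... this RPC's registered chunk list ... */
};

/* Called when the QP enters the error state, before it is destroyed. */
static void invalidate_waiting_chunks(struct ib_qp *qp,
				      struct list_head *pending)
{
	struct pending_req *req;

	list_for_each_entry(req, pending, list) {
		post_local_inv_for_chunks(qp, req);	/* hypothetical */
		req->needs_rereg = true;  /* fastreg again at remarshal */
	}
}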
I think the goals are:
1. Avoid fastreg on an FRMR that is already valid
2. Avoid leaving FRMRs valid indefinitely (preferably just long enough
to execute the RPC request, and no longer)
(1) is a non-issue for INV+FASTREG.
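Just to spell out why: the LOCAL_INV and the FASTREG are chained on the same
send queue, so the MR is guaranteed to be invalid by the time the fast
registration executes. A sketch, using the fast_reg fields of struct
ib_send_wr (full attribute setup elided):

#include <linux/string.h>
#include <rdma/ib_verbs.h>

static int post_inv_then_fastreg(struct ib_qp *qp, struct ib_mr *frmr)
{
	struct ib_send_wr inv_wr, fastreg_wr, *bad_wr;

	memset(&inv_wr, 0, sizeof(inv_wr));
	inv_wr.opcode = IB_WR_LOCAL_INV;
	inv_wr.ex.invalidate_rkey = frmr->rkey;
	inv_wr.next = &fastreg_wr;		/* LOCAL_INV executes first */

	memset(&fastreg_wr, 0, sizeof(fastreg_wr));
	fastreg_wr.opcode = IB_WR_FAST_REG_MR;
	fastreg_wr.send_flags = IB_SEND_SIGNALED;
	fastreg_wr.wr.fast_reg.rkey = frmr->rkey;
	/* iova_start, page_list, page_list_len, length and access_flags
	 * are filled in as for any fast registration. */

	return ib_post_send(qp, &inv_wr, &bad_wr);
}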
Can you please explain your concern about (2)? Is it security (the server
can keep doing RDMA)?
Because you have remote invalidation for that (the server can implement
SEND+INVALIDATE).
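That is, if the server sends its reply with SEND+INVALIDATE, the invalidated
rkey shows up in the client's receive completion and the client can skip its
own LOCAL_INV for that MR. A sketch of the client-side check:

#include <rdma/ib_verbs.h>

static bool remotely_invalidated(const struct ib_wc *wc, u32 rkey)
{
	return (wc->wc_flags & IB_WC_WITH_INVALIDATE) &&
	       wc->ex.invalidate_rkey == rkey;
}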
Having said that, I probably don't see the full picture here as well as you
guys do, so I might be missing something.
Sagi.