On Wed, May 6, 2015 at 6:54 PM, Chuck Lever <chuck.lever@xxxxxxxxxx> wrote:
> Hi Devesh-
>
> On May 6, 2015, at 7:37 AM, Devesh Sharma <devesh.sharma@xxxxxxxxxxxxx> wrote:
>
>> On Mon, May 4, 2015 at 11:27 PM, Chuck Lever <chuck.lever@xxxxxxxxxx> wrote:
>>>
>>> Print an error during transport destruction if ib_dealloc_pd()
>>> fails. This is a sign that xprtrdma orphaned one or more RDMA API
>>> objects at some point, which can pin lower layer kernel modules
>>> and cause shutdown to hang.
>>>
>>> Signed-off-by: Chuck Lever <chuck.lever@xxxxxxxxxx>
>>> ---
>>>  net/sunrpc/xprtrdma/verbs.c |    4 ++--
>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
>>> index 4870d27..0cc4617 100644
>>> --- a/net/sunrpc/xprtrdma/verbs.c
>>> +++ b/net/sunrpc/xprtrdma/verbs.c
>>> @@ -710,8 +710,8 @@ rpcrdma_ia_close(struct rpcrdma_ia *ia)
>>>         }
>>>         if (ia->ri_pd != NULL && !IS_ERR(ia->ri_pd)) {
>>>                 rc = ib_dealloc_pd(ia->ri_pd);
>>> -               dprintk("RPC:       %s: ib_dealloc_pd returned %i\n",
>>> -                       __func__, rc);
>>
>> Should we check for -EBUSY explicitly? Anything other than that is an
>> error in the vendor-specific ib_dealloc_pd().
>
> Any error return means ib_dealloc_pd() has failed, right? Doesn't that
> mean the PD is still allocated, and could cause problems later?

Yes, you are correct. I was thinking that ib_dealloc_pd() has a refcount
implemented in the core layer, so if the PD is in use by any resource it
will always fail with -EBUSY. With an Emulex adapter, however, it is
possible for dealloc_pd to fail with -ENOMEM or -EIO in cases where the
device firmware is not responding; those failures do not mean the PD is
actually in use.
>
>>> +               if (rc)
>>> +                       pr_warn("rpcrdma: ib_dealloc_pd status %i\n", rc);
>>>         }
>>> }
>
> --
> Chuck Lever
> chuck[dot]lever[at]oracle[dot]com

--
-Regards
Devesh