Re: [PATCH 07/11] SUNRPC: Use a cached RPC client and transport for rpcbind upcalls

On Nov 20, 2009, at 3:18 PM, Trond Myklebust wrote:
On Thu, 2009-11-05 at 13:23 -0500, Chuck Lever wrote:
The kernel's rpcbind client creates and deletes an rpc_clnt and its
underlying transport socket for every upcall to the local rpcbind
daemon.

When starting a typical NFS server on IPv4 and IPv6, the NFS service
itself does three upcalls (one per version) times two upcalls (one
per transport) times two upcalls (one per address family), making 12,
plus another one for the initial call to unregister previous NFS
services.  Starting the NLM service adds an additional 13 upcalls,
for similar reasons.

(Currently the NFS service doesn't start IPv6 listeners, but it will
soon enough).

Instead, let's create an rpc_clnt for rpcbind upcalls during the
first local rpcbind query, and cache it.  This saves the overhead of
creating and destroying an rpc_clnt and a socket for every upcall.

Signed-off-by: Chuck Lever <chuck.lever@xxxxxxxxxx>
---

 net/sunrpc/rpcb_clnt.c   |   78 ++++++++++++++++++++++++++++++++++++---------
 net/sunrpc/sunrpc_syms.c |    3 ++
 2 files changed, 65 insertions(+), 16 deletions(-)

diff --git a/net/sunrpc/rpcb_clnt.c b/net/sunrpc/rpcb_clnt.c
index 28f50da..1ec4a1a 100644
--- a/net/sunrpc/rpcb_clnt.c
+++ b/net/sunrpc/rpcb_clnt.c
@@ -20,6 +20,7 @@
#include <linux/in6.h>
#include <linux/kernel.h>
#include <linux/errno.h>
+#include <linux/spinlock.h>
#include <net/ipv6.h>

#include <linux/sunrpc/clnt.h>
@@ -110,6 +111,9 @@ static void rpcb_getport_done(struct rpc_task *, void *);
static void			rpcb_map_release(void *data);
static struct rpc_program	rpcb_program;

+static struct rpc_clnt *	rpcb_local_clnt;
+static struct rpc_clnt *	rpcb_local_clnt4;
+
struct rpcbind_args {
	struct rpc_xprt *	r_xprt;

@@ -163,7 +167,7 @@ static const struct sockaddr_in rpcb_inaddr_loopback = {
	.sin_port		= htons(RPCBIND_PORT),
};

-static struct rpc_clnt *rpcb_create_local(u32 version)
+static int rpcb_create_local(void)
{
	struct rpc_create_args args = {
		.protocol	= XPRT_TRANSPORT_UDP,
@@ -171,12 +175,37 @@ static struct rpc_clnt *rpcb_create_local(u32 version)
		.addrsize	= sizeof(rpcb_inaddr_loopback),
		.servername	= "localhost",
		.program	= &rpcb_program,
-		.version	= version,
+		.version	= RPCBVERS_2,
		.authflavor	= RPC_AUTH_UNIX,
		.flags		= RPC_CLNT_CREATE_NOPING,
	};
+	static DEFINE_SPINLOCK(rpcb_create_local_lock);
+	struct rpc_clnt *clnt, *clnt4;
+	int result = 0;
+
+	spin_lock(&rpcb_create_local_lock);
+	if (rpcb_local_clnt)
+		goto out;
+
+	clnt = rpc_create(&args);
+	if (IS_ERR(clnt)) {
+		result = -PTR_ERR(clnt);
+		goto out;
+	}

-	return rpc_create(&args);
+	clnt4 = rpc_bind_new_program(clnt, &rpcb_program, RPCBVERS_4);
+	if (IS_ERR(clnt4)) {
+		result = -PTR_ERR(clnt4);
+		rpc_shutdown_client(clnt);
+		goto out;
+	}
+
+	rpcb_local_clnt = clnt;
+	rpcb_local_clnt4 = clnt4;
+
+out:
+	spin_unlock(&rpcb_create_local_lock);
+	return result;
}

You can't have tested this. At the very least you cannot have done so
with spinlock debugging enabled...

I moved the rpcb_create_local_lock spinlock out of the function, enabled every spinlock-debugging option I could find under the kernel hacking menu, and gave the guest 2 CPUs. The spinlock checker reported a problem almost immediately with XFS (even with just one virtual CPU), so I know the checking is enabled and working.

I can't reproduce any problems with the rpcbind upcall here. Do you have anything more specific?
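For reference, the spinlock-debugging options in question would typically include the following (the exact set and names depend on kernel version; this is an assumed .config fragment, not taken from the thread):

```
CONFIG_DEBUG_SPINLOCK=y
CONFIG_DEBUG_SPINLOCK_SLEEP=y
CONFIG_DEBUG_LOCK_ALLOC=y
CONFIG_PROVE_LOCKING=y
CONFIG_DEBUG_LOCKDEP=y
```

CONFIG_DEBUG_SPINLOCK_SLEEP is the option that emits "BUG: sleeping function called from invalid context" when a sleeping allocation happens inside a spinlocked region, so whether the checker fires here depends on the allocation actually blocking paths being exercised.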

--
Chuck Lever
chuck[dot]lever[at]oracle[dot]com



--
To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
